
J Grid Computing (2008) 6:255–276 DOI 10.1007/s10723-007-9095-3

Market-oriented Grids and Utility Computing: The State-of-the-art and Future Directions

James Broberg · Srikumar Venugopal · Rajkumar Buyya

Received: 18 September 2007 / Accepted: 12 December 2007 / Published online: 28 December 2007
© Springer Science + Business Media B.V. 2007

Abstract Traditional resource management techniques (resource allocation, admission control and scheduling) have been found to be inadequate for many shared Grid and distributed systems that consist of autonomous and dynamic distributed resources contributed by multiple organisations. They provide no incentive for users to request resources judiciously and appropriately, and do not accurately capture the true value, importance and deadline (the utility) of a user's job. Furthermore, they provide no compensation for resource providers to contribute their computing resources to shared Grids, as traditional approaches have a user-centric focus on maximising throughput and minimising waiting time rather than maximising a provider's own benefit. Consequently, researchers and practitioners have been examining the appropriateness of ‘market-inspired’ resource management techniques to address these limitations. Such techniques aim to smooth out access patterns and reduce the chance of transient overload, by providing a framework for users to be truthful about their resource requirements and job deadlines, and offering incentives for service providers to prioritise urgent, high utility jobs over low utility jobs. We examine the recent innovations in these systems (from 2000–2007), looking at the state-of-the-art in price setting and negotiation, Grid economy management and utility-driven scheduling and resource allocation, and identify the advantages and limitations of these systems. We then look to the future of these systems, examining the emerging ‘Catallaxy’ market paradigm. Finally we consider the future directions that need to be pursued to address the limitations of the current generation of market-oriented Grids and Utility Computing systems.

J. Broberg (B) · S. Venugopal · R. Buyya
Grid Computing and Distributed Systems (GRIDS) Laboratory, Department of Computer Science and Software Engineering, The University of Melbourne, Melbourne, Australia
e-mail: [email protected]

S. Venugopal
e-mail: [email protected]

R. Buyya
e-mail: [email protected]

Keywords Market-oriented Grids · Utility computing · Catallaxy

1 Introduction

The rise of Grid computing [15] has led to knowledge breakthroughs in fields as diverse as climate modelling, drug design and protein analysis, through the harnessing of computing, network, sensor and storage resources owned and administered by many different organisations. These fields (and other so-called Grand Challenges) have benefited from the economies of scale brought about by Grid computing, tackling difficult problems that would be infeasible to solve using the computing resources of a single organisation.

Despite the obvious benefits of Grid computing, there are still many issues to be resolved. With increasing popularity and usage, large Grid installations are facing new problems, such as excessive spikes in demand for resources coupled with strategic and adversarial behaviour by users. Such conditions have been observed on PlanetLab [8, 30], which, while not specifically considered a Grid, is one of the largest open-access distributed systems of its kind, and Mirage [12], a shared sensornet testbed, among other examples.

In both systems, bursts of high contention—when demand for available resources exceeds their supply—have been observed frequently. Spikes in PlanetLab load often correspond closely to deadlines for major computing and networking conferences, when disparate research teams are competing for resources to run their experiments [30]. Mirage is one of very few real sensor network testbeds available, and is of prime interest to commercial and academic researchers across the globe wishing to explore the behaviour of their algorithms on a real system [27]. As a result, it can be difficult to get access to this system at peak times.

In such situations, traditional resource management techniques (resource allocation, admission control and scheduling) have been found by many researchers to be lacking in ensuring fair and equitable access to resources [9, 22, 36, 40]. Traditional resource management techniques for clusters focus on metrics such as maximising throughput, and minimising mean waiting time and slowdown. These metrics fail to capture the more subtle requirements of users, such as quality of service constraints. Consequently, researchers have been examining the appropriateness of ‘market-inspired’ resource management techniques for ensuring that users are treated fairly. To achieve this, there need to be incentives for users to be flexible about their resource requirements and job deadlines, and to utilise these systems outside of peak time. Similarly, where possible, there also need to be provisions to accommodate users with urgent work. These aims are achieved by users assigning a utility value to their jobs—effectively a fixed or time-varying valuation that captures various quality of service constraints (deadline, importance, and satisfaction) associated with a user's job. This typically represents the amount they are willing to compensate a service provider to satisfy their job demands. Shared Grid systems are then viewed as a marketplace, where users compete for resources based on the perceived utility or value of their jobs.

The notion of utility is not restricted to end-users alone, especially in commercial Grid and cluster systems. As is the case in free markets, all participants (described in Section 2.1) are self-interested entities that attempt to maximise their own gain. Service providers in commercial systems will attempt to maximise their own utility. In this instance, the utility they receive may directly correlate with the profit they make from the difference between the cost of offering the service and the compensation received, giving them incentive to participate in Grid markets. A provider can choose to prioritise high yield (i.e. profit per unit of resource) jobs from users, so that its own utility is maximised.

Furthermore, one or many brokers can operate in Grid systems, acting as middlemen between end-users and service providers. They can perform a number of important functions, such as aggregating resources from many providers, negotiating and enforcing quality of service targets, and reducing the complexity of access for end-users. As compensation for performing these value-added functions, brokers generate utility for themselves by selling access to resources at a higher cost than they pay a service provider, thereby making a profit.

In this paper we examine many recent advances in the field of utility-oriented Grid computing markets. There have been prior surveys of market-based resource management [44], utility computing [45] and Grid economics [11]. This survey is intended to complement these, by covering the most recent research in Grid computing markets. The remainder of the paper is organised as follows: We provide an overview of market-based approaches for utility-driven distributed systems in Section 2, describing the benefits of utility-driven computing and explaining the role of markets in Grid computing. The state-of-the-art in market-based utility computing is presented in Section 3, highlighting the benefits of utility-driven pricing, resource allocation, scheduling and admission control. Section 4 describes the so-called ‘Catallaxy’ paradigm, which has moved the research focus from stand-alone Grid marketplaces to multiple, linked, decentralised and autonomic ‘free markets’. In Section 5, we consider the future directions that need to be pursued to address the limitations of the current generation of market-oriented Grids and Utility Computing systems. Finally, we summarise the contribution of this paper in Section 6.

2 Overview of Market-Based Approaches for Utility-Driven Distributed Systems

As networked resources such as Grids, clusters, storage and sensors are being aggregated and shared among multiple stake-holders with often competing goals, there has been a rising consensus that traditional scheduling techniques are insufficient. Indeed, simply aiming to maximise utilisation for service providers, and to minimise waiting time and maximise throughput for end-users, does not always capture the diverse valuations that participants in these systems place on the successful execution of jobs and services. Therefore, the notion of maximising the utility of each participant in these systems is fast gaining importance. The behaviour, roles and responsibilities of each of these participants, as well as how they each measure utility, are explored in Section 2.1. An overview of the motivation behind market-driven utility computing and the obstacles that need to be overcome is presented in Section 2.2. In Section 2.3, we look at some of the emerging technology trends that are allowing service providers unprecedented flexibility in partitioning and allocating their resources for computing marketplaces.

2.1 Participants

Participants, or ‘actors’, in utility-oriented Grid computing markets can be generally classified as belonging to one of three categories: users, brokers or service providers. In some instances, participants may actually perform the functions of more than one category—a user may also offer some of its own resources to other participants (acting as a service provider), or a service provider may aggregate other resources along with its own (acting as a broker). Each actor in the system is a self-interested, utility-maximising entity. How each actor measures and maximises its utility depends on the system it operates in—the behaviour exhibited in a shared system where market-driven techniques are used simply to regulate access differs greatly from a profit-driven commercial system. Many of these differences are highlighted in our study of existing systems in Section 3.

A user requires access to more resources than it has available, for requirements that can range from jobs to be processed as interdependent tasks (e.g. scientific work-flows [46]) to web services that need to be run on networked computers. These users are willing to compensate a provider to satisfy their requirements, depending on the utility they receive. The sophistication of this compensation depends on the system being used, as outlined in Section 3.1.

Rather than dealing with multiple heterogeneous service providers directly, users obtain access to resources via one or many brokers. These brokers act as ‘middlemen’ by virtualising and making available the resources of multiple service providers, thereby shielding the user from the complexity of multiple architectures, operating systems and middleware. This aggregation allows greater economies of scale, improving throughput and reducing cost. Brokers can have more specific responsibilities, such as reserving resource slots, scheduling jobs and services on resources held by service providers, performing admission control to avoid overload and ensuring that a user's quality of service requirements can be met.

Service providers satisfy the needs of end-users by providing the resources (i.e. disk, memory, CPU, access to sensors, etc.) requested by end-users. These may be provided directly or may be arbitrated by brokers. A service provider may offer its services through multiple brokers, and even offer more resources than it has available, in an effort to improve utilisation via statistical multiplexing. A service provider is ultimately responsible for ensuring that all its commitments are met. It must enforce appropriate performance isolation for jobs and services running on its resources, to ensure that users' quality of service targets are satisfied. This can be achieved by appropriate partitioning or scheduling of its resources.

2.2 Utility Computing and the Role of Market-Based Techniques

Lai [22] attempts to motivate the use of market-based techniques for resource assignment, whilst exploring the pitfalls of such approaches. Lai notes that, over a long time frame, assigning resource shares in proportion to user demands is not sufficient to reach economic efficiency, as there is no incentive for users to shift their usage from high demand periods to low demand periods. Also, with variable demand, simple fixed price schemes are not as efficient as variable prices in addressing transient peak loads. Indeed, the more variable the demand, the greater the efficiency loss of a fixed price scheme. When these conditions are combined with a controlled pool of funds, users have an incentive to truthfully reveal their valuation of tasks. Finer grained bidding for resources can occur, where users can choose between reserved resources or best effort scheduling, depending on necessity.

Shneidman et al. [36] have noted important emerging properties of these systems, where users are self-interested parties that both consume and supply resources, demand often exceeds resource supply, and centralised (global) approaches cannot be used due to the sheer scale of these systems. The authors claim the goal is no longer maximising utilisation, especially when demand exceeds supply, and more intelligent allocation techniques are needed than just best effort or randomised allocation. They propose that an efficient allocation mechanism (i.e. a social policy) is needed that allocates resources to the users who have the highest utility for them, favours small experiments (a common practice in scheduling) and underrepresented stake-holders (e.g. those that have been denied or refrained from receiving service in the past), and maximises revenue.

The authors note that many recent developments make market-based techniques appropriate and timely, as they could be immediately deployed in many commercial scenarios, such as test-beds and shared Grids, to solve real world problems. Developments in Virtual Machine technology (Xen [6], VMWARE, etc.) and other resource isolation techniques (Linux Class-based Kernel Resource Management, BSD Jails [33]) can be used to partition, isolate and share resources. Combinatorial bidding languages are emerging that are sufficiently expressive, and such bids can often be solved as mixed integer optimisation problems in modern solver packages like CPLEX [28].

Shneidman et al. [36] state that there are a number of problems that need to be solved to make effective market-based, utility-driven approaches a reality. Resource allocation policies must be explicitly defined, particularly the social policy goals that need to be met. However, we note that it is unclear at this point as to who among the users, the administrators, and the service providers should mandate these goals. Sellers of resources must be able to efficiently divide resources into both explicit and implicit resource bundles. However, whilst tools to achieve this (e.g. Xen, CKRM) are maturing, they still have some way to go. Buyers' needs also must be predicted, so that the amount of required resources is not misestimated, leading to excess resources that have been reserved and not used, or wasted due to unfinished experiments that may be invalid.

Shneidman et al. [36] also note that forecasting resource requirements is challenging and new for users. It is unclear whether the market can resolve this, or whether ‘practice’ runs in a best effort staging ground are needed so that users can gain experience predicting their needs. However, such staging grounds may be impractical in reality, and are in opposition to the free market approach of ‘letting the market work it out’ by allowing users to suffer the consequences of poor choices. Users need to find the true value of resources they wish to reserve. The currency system needs to be well defined, and receive ongoing care so that it functions correctly to maximise the utility of the system, avoiding the typical problems of starvation (users run out of money), depletion (users hoard currency or leave the system) and inflation (currency is injected into the system without limit). The use of real money is favoured by many researchers to encourage more honest valuations of resources in the system. An effective means to calculate and express valuation of resources is needed for efficient operation of these systems. Bidding languages are well suited to this task and are becoming more sophisticated. However, intuitive interfaces are needed to construct these bids, allowing users to easily and accurately express their preferences.

Linked market-based mechanisms have been proposed by Shneidman et al. [36] as an interesting possibility, quantifying the value of cluster time at one network versus another (in a similar fashion to currency exchanges). This is complementary to the move from centralised approaches to Catallaxy-inspired [3, 14] distributed and autonomic systems, described in Section 4.

2.3 Emerging Technologies for Computing Marketplaces

The recent growth in popularity of broadly accessible virtualisation solutions and services provides an interesting framework where, instead of a ‘task’ or process, a lightweight virtual machine (VM) image can be the unit of execution and migration [26]. Whilst virtual machines and containers have existed for decades, they have only recently become ubiquitous due to new capabilities (such as paravirtualisation) in recent Intel and AMD processors, extending virtualisation from expensive ‘big-iron’ Unix machines to commodity computers and operating systems. These recent developments have allowed the migration of unmodified applications encapsulated in virtual machines. This can be achieved in a work-conserving fashion, without the running application or any dependent clients or external resources being aware that it has occurred. Virtualisation solutions such as VMWARE make this possible by encapsulating the state of the VM, such as CPU, networking, memory, and I/O, while the virtual machine is still running. Open network connections can be transferred as well, due to the layer of indirection provided by the VMWARE Virtual Machine layer. In terms of state, physical memory is often the largest overhead to be migrated. Stopping a VM to save and transfer this state can cause a lengthy downtime, making the migration process far from transparent. As such, this is handled in situ by a virtual memory layer [38], allowing memory state to be transferred whilst a VM is still running, by iteratively pre-copying memory to the destination node and marking it as temporarily inaccessible on the source.
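
As a rough illustration of this iterative pre-copy process, the sketch below simulates shrinking rounds of dirty-page transfer; the page counts, re-dirtying rate and stop-and-copy threshold are illustrative assumptions, not parameters of any particular hypervisor.

    # Illustrative simulation of iterative pre-copy live migration.
    # All numbers (page counts, dirty rate, threshold) are assumptions.
    def precopy_migration(total_pages, dirty_fraction, stop_copy_threshold,
                          max_rounds=30):
        dirty = total_pages          # round 1 copies all of memory
        transferred = 0
        rounds = 0
        for rounds in range(1, max_rounds + 1):
            transferred += dirty     # copy currently dirty pages
            # while copying, the still-running VM re-dirties a fraction
            dirty = int(dirty * dirty_fraction)
            if dirty <= stop_copy_threshold:
                break                # small enough: pause VM, copy the rest
        transferred += dirty         # final stop-and-copy round
        return rounds, transferred

    rounds, pages = precopy_migration(1 << 18, 0.2, 1024)
    print(f"converged in {rounds} rounds, {pages} pages sent")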

3 The State-of-the-Art in Market-Based Utility Computing

Utility-driven distributed computing marketplaces are typically made up of several key components and processes. Users can have jobs that need to be processed, for which they are willing to compensate an entity. The level of compensation depends on a number of factors, including the currency available to the client, the contention for resources and the urgency of the job. These factors are considered in the price setting and negotiation phase, described in Section 3.1.

As described earlier, one or many brokers can exist in a Grid marketplace, acting as ‘middlemen’ by aggregating and virtualising access to disparate Grid resources offered by service providers. In the face of competition from many clients for Grid resources, brokers and service providers must manage these resources effectively, meeting demand where possible but ultimately maximising the utility (i.e. revenue) gained. This is achieved via utility-driven resource allocation, scheduling and admission control, described in Section 3.2.

For a Grid marketplace to be sustainable, the Grid economy must be managed effectively. If users have an unlimited supply of currency, there are no rewards or disincentives for users to manage their currency and use it prudently and truthfully, revealing their true valuation for jobs. As such, the currency, whether real or virtual, must be carefully managed to ensure equity for all participants. This is discussed in further detail in Section 3.3.

Some systems address many or all of the above aspects of market-based, utility-driven distributed systems. Therefore, we will first introduce these systems below:

Buyya et al. [1, 10, 11] proposed an economy-driven resource management architecture for global computational Grids. This consists of a generic framework, called GRACE (Grid Architecture for Computational Economy), for trading resources dynamically, in conjunction with existing Grid components such as local and Grid or meta-schedulers. The function of GRACE is to enable supply and demand-driven pricing of resources, to regulate and control access to computational resources in a Grid.

Bellagio [4] is a system that seeks to allocate resources for distributed computing infrastructures in an economically efficient fashion, to maximise aggregate end-user utility. In this system, users identify resources of interest via a resource discovery mechanism, such as SWORD [29], and register their preference (via a constrained supply of virtual currency) for said resources over time and space using combinatorial auction bids [28]. Unlike existing work that focuses on contention for a single resource (CPU cycles), they are motivated by scenarios where users express interest in ‘slices’ of heterogeneous goods (e.g. disk space, memory, bandwidth).

Chun et al. [12] propose a so-called microeconomic resource allocation scheme for sensornet testbed resource management, called Mirage, using combinatorial auctions within a closed virtual currency environment. The operation of Mirage is very similar to that of Bellagio; however, as it is a real-world deployed system, observations of it have revealed many users behaving strategically to exploit the system.

Tycoon [21, 23] is a market-based resource allocation system that utilises an Auction Share scheduling algorithm, where users trade off their preferences for low latency (e.g. for a web server), high utilisation (e.g. batch computing) and risk.

Libra [35] is a computational economy-based scheduling system for clusters that focuses on improving the utility, and consequently the quality of service, delivered to users. Libra is intended to be implemented in the resource management and scheduling (RMS) logic of cluster computing systems, such as PBS [17]. Libra+$ [41] is an extension that dynamically prices cluster resources by considering the current workload, to regulate supply and demand and aim for market equilibrium. In addition, LibraSLA [42] incorporates service level agreements (SLAs) by considering the penalty of not meeting an SLA in the admission control and scheduling decisions in a cluster.

FirstPrice and FirstReward are two utility-driven scheduling heuristics that balance the risk of future costs against the potential gains (reward) of accepting tasks [20]. The importance of using admission control in such schemes is also illustrated. Tasks in such systems are batch jobs that utilise resources that are predicted accurately by the client, but provide no value until completion. Each job is characterised by a utility function that gives the task a value as a function of its completion time. This value reduces over time and, if unbounded, can become a negative value (i.e. a penalty) placed on the service provider. These systems do not model the currency economy or injection (unlike Bellagio and Mirage). Sealed bids are used and no price signals to buyers are assumed.

Popovici and Wilkes [32] examine profit-based scheduling and admission control algorithms, called FirstProfit and FirstOpportunity, that consider a scenario where service providers rent resources from resource providers, who then run and administer them. Clients have jobs that need processing, with price values (specifically, a utility function) associated with them. The service provider rents resources from the resource provider at a cost, and the price differential is the job's profit that goes to the service provider. Complicating matters is the fact that resource uncertainty exists, as resources may be over-provisioned. If the service provider promises resources they cannot deliver, the clients' QoS targets will not be met and the price they will pay will decline, as defined by the client utility function. It is assumed that service providers have some domain expertise and can reasonably predict running times of jobs in advance.

3.1 Price Setting and Negotiation

In a market-driven distributed system, users express the value or ‘utility’ they place on successful and timely processing of their jobs through the process of price setting and negotiation. Users can attempt to have their job prioritised and allocated a greater proportion of resources by increasing the price they are willing to pay to a service provider for the privilege. Conversely, users can endeavour to save their currency when processing low priority tasks by specifying flexible deadlines. Such preferences are captured by the price setting and negotiation mechanisms of a given Grid computing market. These price setting and negotiation approaches are presented below, and summarised in Table 1.

Table 1 Summary of price setting and negotiation

System name            Price setting  Acceptance criteria                 Penalty
Nimrod-G [1]           Fixed          Posted price                        No
G-commerce [39]        Fixed          Current price > mean_price          N/A
Bellagio [4]           Fixed          ‘Threshold’-based                   N/A
Mirage [12]            Fixed          ‘Winner determination problem’      N/A
Tycoon [21, 23]        Fixed          First/second price                  No
Libra [35]             Fixed          Minimum cost                        No
Libra+$ [41]           Fixed          Deadline/budget feasible            Yes (LibraSLA [42])
Li [24]                Fixed          nth price                           No
FirstPrice [13]        Variable       None                                No
FirstReward [20]       Variable       Risk versus reward                  Yes
FirstProfit [32]       Variable       Risk versus per-job reward          Yes
FirstOpportunity [32]  Variable       Effect on profit                    Yes
Aggregate utility [5]  Variable       Contract feasibility/profitability  Yes

3.1.1 Fixed Price Models

The GRACE [10, 11] negotiation system does not specifically prescribe a particular price model, claiming it should be left up to each participant to define their own utility-maximising strategies when negotiating for access to resources. However, the interactions described in preliminary work on GRACE closely resemble a double auction [10], where ‘asks’ and ‘bids’ that can incorporate costs and flexible deadlines (to trade off against cost) are exchanged until a fixed agreement is struck, or the negotiation is abandoned. Such double auctions were subsequently found to be highly efficient by Placek and Buyya [31], and were integrated into Storage Exchange, a platform that allows organisations to treat storage as a tradeable resource. GRACE resource trading protocols were also utilised in the Nimrod-G Resource Broker [1] to define the access prices for globally distributed resources. These prices are utilised by Nimrod-G to schedule complex computations, where results can be optimised for time (within a budget) or to minimise costs (whilst meeting a deadline).

The G-commerce system [39] examines a simulated computational Grid economy in order to compare the efficacy of two market strategies—commodities markets and auctions—in terms of price stability, market equilibrium, consumer efficiency and producer efficiency. Wolski et al. examined workload conditions of both under-demand (supply > demand) and over-demand (demand > supply) for both market strategies, finding that commodities markets led to significantly more stable pricing than auctions. The commodities markets also exhibited higher throughput and utilisation than auctions under conditions of under- and over-demand, leading the authors to conclude that commodity markets are preferable to auctions for Grid economies.

As well as submitting typical parameters required by batch computing job submission systems, such as estimated runtime E, location of data sets and expected output, the Libra [35] system allows users to express more descriptive requirements by using parameters such as deadline D and budget B. A deadline denotes when the user needs the results of the job by, and the budget denotes how much they are willing to pay to have the job completed by the deadline. The Libra RMS makes a simple minimum cost computation for a submitted job, to see if a user's budget B is sufficient to cover the cost, C = α ∗ E + β ∗ E/D, where α and β are coefficients. The α parameter captures the raw cost of the resources required and the duration they are needed for, while β denotes the incentive offered to the user for specifying an actual deadline. This ensures that a user is charged a fixed price for the cluster hours it uses, regardless of deadline, while considering the user's flexibility regarding the processing deadline and compensating or penalising them accordingly. This provides a good mix of user and provider satisfaction.
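
As a worked sketch of this admission computation (with illustrative coefficient and job values; Libra itself does not prescribe these numbers):

    # Libra's minimum cost for a job with estimated runtime E and
    # deadline D: C = alpha * E + beta * E / D. alpha prices the raw
    # resource usage; beta rewards a relaxed deadline (a larger D
    # shrinks the E/D term, and hence the cost).
    def libra_cost(E, D, alpha, beta):
        return alpha * E + beta * E / D

    E, D, budget = 3600.0, 7200.0, 50.0      # illustrative job parameters
    cost = libra_cost(E, D, alpha=0.01, beta=10.0)
    print(f"cost = {cost:.2f}, admitted: {cost <= budget}")  # cost = 41.00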

Libra+$ [41] extends the existing Libra system to consider a system where the prices of cluster resources vary dynamically based on the current workload, to regulate supply and demand. Whilst the prices for resources may vary over time, a user with an accepted job will pay a fixed price, based on the current price in the marketplace for the particular resources they request.

Market-inspired systems such as Bellagio [4] and Mirage [12] utilise second-price and first-price auctions respectively to schedule and allocate resources. In each case, users specify a fixed value bid for a set of resources over time and space. Auctions are held at fixed intervals (e.g. every hour for Bellagio), and users are either notified if they are successful and are allocated the requested resources, or are denied access if unsuccessful. Users are encouraged to bid their true value in this approach, as they must wait until the next auction to try again, potentially with a modified bid to improve their chances of success. Users only have limited means to express the utility of their job: the fixed price they are willing to pay, the duration of time they need the resources for, and the deadline it must be completed by.

The Tycoon [21, 23] system similarly uses auctions, but removes the onus on the user to adjust their bids based on their success at auction. For each application a user wants to run, a so-called ‘parent agent’ is created, which manages the budget and the associated child agents. A user specifies high-level parameters such as the number of credits available to that parent, the deadline and the number of hosts needed. A child agent initiates the bidding process, potentially liaising with multiple providers. It monitors the progress of these negotiations and reports it up the chain. An auctioneer computes efficient first or second price bids for CPU slices, based on the funds and information presented by the child nodes interacting with it. Funds can be received upon entry into the system, and also at regular intervals, but this element and its effect are not explored further.

Li et al. [24] propose an iterative gradient-climbing price setting approach to balance supply and demand in agent-driven Grid marketplaces. In this approach, the participants iteratively establish prices, using a gradient-climbing adaptation such that supply meets demand, under an nth-price auction scheme. While each bid uses a fixed price, future bids are automatically modified based on current market conditions: if a price is rejected, an agent subsequently increases its bid; if it is accepted, it reduces its bid. In this manner, agents in the market act independently to maximise their own individual profits.
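
A minimal sketch of this accept/reject adaptation, assuming a simple multiplicative step; the step size, starting bid and clearing prices are invented for illustration and simplify away the underlying nth-price auction:

    # Toy gradient-climbing bid adjustment: raise the bid after a
    # rejection, lower it after an acceptance, so bids track the
    # market-clearing price. Step size and prices are assumptions.
    def adjust_bid(bid, accepted, step=0.05):
        return bid * (1 - step) if accepted else bid * (1 + step)

    bid = 10.0
    for clearing_price in [12.0, 12.0, 11.5, 11.5, 11.5]:
        accepted = bid >= clearing_price
        bid = adjust_bid(bid, accepted)
        print(f"bid={bid:.2f} accepted={accepted}")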

3.1.2 Variable Price Models

In the FirstPrice/FirstReward [20] and FirstProfit/FirstOpportunity [32] job scheduling systems, each user's job is characterised by a utility function that gives the task a value as a function of its completion time. This value reduces over time and, if unbounded, can become a negative value (i.e. a penalty) placed on the service provider. Jobs provide no value to the user until they are completed. Each job's deadline is equal to its minimum runtime, so any delay incurs an immediate cost. After this point, the compensation that the provider receives decays linearly as time progresses past the deadline.
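
Such a utility function can be sketched as follows; the value, deadline, decay rate and penalty cap are illustrative assumptions:

    # Job utility as a function of completion time t: full value up to
    # the deadline, then linear decay. A negative result is a penalty
    # charged to the provider if the decay is unbounded.
    def job_utility(t, value, deadline, decay_rate, max_penalty=None):
        u = value if t <= deadline else value - decay_rate * (t - deadline)
        if max_penalty is not None:
            u = max(u, -max_penalty)      # bounded-penalty variant
        return u

    print(job_utility(90, value=100, deadline=60, decay_rate=2))   # 40
    print(job_utility(200, value=100, deadline=60, decay_rate=2,
                      max_penalty=50))                             # -50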

AuYoung et al. [5] propose using aggregate utility functions in conjunction with individual utility functions for sets of jobs, described as a ‘contract’. An aggregate utility function is proposed in the form aggregate_utility = αx^β. For a contract, the overall payment is the sum of the per-job prices multiplied by the value of the aggregate utility function. The parameters can be changed to reflect the user's preference for having a full set of tasks processed. When α is 1 and β is 0, the user is indifferent as to whether a partial or complete set is processed. When α is 1 and β is 1, there is a linear relationship between the level of completeness in processing the set and the aggregate utility function. When α is 1 and β increases, the user places a higher dependency on the majority of the jobs in a set completing, penalising the provider if this is not the case. β controls the client's sensitivity to the aggregate metric—high β and α values would result in high risk for the service provider, but a high reward (a “bonus”) if they can satisfy as much of a contract (i.e. set) as possible.
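
A small sketch of this contract payment rule, where x is taken to be the fraction of the set's jobs that completed; the prices and parameter values are invented for illustration:

    # Contract payment = (sum of per-job prices) * aggregate utility,
    # with aggregate_utility = alpha * x**beta and x the completed
    # fraction of the contract's jobs. All values are illustrative.
    def contract_payment(job_prices, completed, alpha=1.0, beta=2.0):
        x = completed / len(job_prices)
        return sum(job_prices) * alpha * x ** beta

    prices = [10, 10, 10, 10]
    print(contract_payment(prices, completed=4))  # full set: 40.0
    print(contract_payment(prices, completed=2))  # half set: 10.0 (beta=2 penalises)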

3.2 Utility-Driven Resource Management Systems (RMS)

Utility-driven resource management systems (RMS) actively consider the utility of participants when performing resource allocation, scheduling and admission control. Some well-known utility-aware resource management systems are presented below, and summarised in Table 2.

3.2.1 Auction-Based Resource Allocation

Bellagio: A ‘second-price’ style auction is employed in Bellagio [4] to encourage users to reveal the true value they place on these resources. Combinatorial auctions are then held every hour to allocate resources. Given that clearing such combinatorial auctions is known to be NP-complete, approximation algorithms are used to determine the winner. The Bellagio system assumes that an authentication entity exists that authenticates bids, resource capabilities (reservations) and account balances, such as SHARP [16]. The virtual currency economy is managed such that one user cannot dominate the available resources for lengthy periods of time. This currency system is explained in further detail in Section 3.3.
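
Clearing such an auction means solving the winner determination problem. The brute-force sketch below illustrates it at toy scale (Bellagio itself relies on approximation algorithms, and the bids here are invented):

    from itertools import combinations

    # Each bid is (price, set of resource items wanted). Choose the
    # subset of bids with maximum total price such that no item is
    # sold twice. Brute force is only feasible for a handful of bids.
    def clear_auction(bids):
        best_value, best_set = 0, ()
        for r in range(1, len(bids) + 1):
            for subset in combinations(bids, r):
                items = [i for _, bundle in subset for i in bundle]
                if len(items) == len(set(items)):   # bundles are disjoint
                    value = sum(price for price, _ in subset)
                    if value > best_value:
                        best_value, best_set = value, subset
        return best_value, best_set

    bids = [(8, {"cpu1", "cpu2"}), (5, {"cpu1"}), (4, {"cpu2"})]
    value, winners = clear_auction(bids)
    print(value)   # 9: the two single-item bids beat the bundle bid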

Experimental results show that Bellagio scales well to clearing auctions involving thousands of bids and resources. Bellagio generates more utility for its users than a standard proportional share resource allocation under a variety of load conditions. Bellagio is especially effective under higher system load, spreading out resource requests in order to maximise overall utility. The fairness of the Bellagio policy (protecting light users against starvation from heavy users) can be ensured by using an appropriate currency distribution policy.

Mirage: The resource allocation problem for sensornet testbeds (such as Mirage [12]) is well suited to combinatorial auctions, as resources are both substitutable (i.e. the specific machine allocated is often not important) and complementary (a partial request allocation is not useful, but full allocations are complementary to each other). Mirage can locate and reserve resources based on per-node attributes (e.g. a certain feature needed at each node) but not inter-node attributes (e.g. a node is 10 m from another node). The bid format language used by Mirage for users to express their preferences is based on XOR [28], allowing them to request resources over both space and time, with the valuation and deadline flexibility of their bid contributing to the probability of a successful bid.

Table 2 Summary of utility-driven resource management systems

System name            Allocation                   Scheduler                      Adm. control
Bellagio [4]           Combinatorial auction-based  Proportional share             N/A
Mirage [12]            Auction-based                Proportional share             N/A
Tycoon [21, 23]        Auction-based                Auction share                  No
Libra [35]             N/A                          Proportional share             Yes (basic)
Libra+$ [41]           N/A                          Proportional share             Yes
Li [24]                Double auction               Proportional share             N/A
FirstPrice [13]        N/A                          Highest unit value first       No
FirstReward [20]       N/A                          Highest unit value first*      Yes
FirstProfit [32]       N/A                          Highest per-job profit first   Yes
FirstOpportunity [32]  N/A                          Highest total profit first     Yes
Aggregate utility [5]  N/A                          Highest contract profit first  Yes

* Risk-aware.

While monitoring the system over a period of 4 months, the authors observed that the values users placed on resources varied over four orders of magnitude, validating the auction-based approach. Users also requested a range of resource allocations and durations, highlighting the flexibility of the bidding language in allowing users to share access to the system where appropriate, thereby improving utilisation. However, as the authors observed a live system, it was not compared against any baseline scheduler, so the increase in overall utility was not quantified.

Tycoon: The Tycoon [21, 23] system utilises an Auction Share scheduling algorithm, where users trade off their preferences for low latency, high utilisation and risk—depending on their specific requirements. Many schedulers in similar systems utilise proportional share, weighted by the importance of each process. As the authors note, this method can be abused, as it depends on users truthfully valuing their processes, and as the sum of weights increases, each process's share approaches zero. The Tycoon approach encourages users to devise specialised strategies for each specific application they need to run, based on their needs. Tycoon utilises an agent-driven approach, which removes much of the complexity (and, we note, the control) in devising effective market-driven strategies for users' processes.

When a market strategy is adapted to proportional scheduling, it performed well under high utilisation—equivalent to a best case proportional scheduling scenario where users value their jobs truthfully. When no market principles are used and users value their tasks strategically, utility converges to zero as the load increases.

An auction strategy attempts to provide the best elements of many approaches—high utilisation (proportional share), low latency (borrowed virtual time share) and low risk (resource reservation). Users specify their preference based on application-specific needs. In contrast, a proportional share scheduler can only hope to achieve high utilisation or low latency/fairness, but not both, and users are not encouraged to value their tasks truthfully. The limited results demonstrate that an Auction Share approach can be effective in ensuring predictable latency, and can also be fair by yielding the CPU when not needed.

Combinatorial Exchange: Schnizler et al. [34] propose to formulate the resource trading and allocation problem as a multi-attribute combinatorial exchange model. Like many other Grid economy researchers, the authors claim that traditional (e.g. FCFS) allocation and scheduling are not sufficient in market-driven computational Grids, as they fail to capture the utility (i.e. true valuation) that users place on the successful processing of their jobs. Flat rate pricing schemes for Grid economies are also insufficient, as they do not capture the variability in demand and user utility that exists. The authors list certain desirable properties that any pricing and allocation scheme should have in an economy-driven Grid:

– Allocative efficiency: Pareto efficiency dictates that an allocation mechanism must ensure that no agent can be made better off without making at least one other agent worse off. If user utility is equivalent and transferable, then a mechanism should maximise the sum of individual utilities.

– Incentive compatibility: All participants must report their preferences (i.e. valuations) truthfully.

– Individual rationality: Requires that users participating in a mechanism have a higher utility than before they joined—otherwise they have no incentive to participate.

– Budget balance: A mechanism is budget balanced if the payments sum to zero over all participating agents. This can be achieved by a closed-loop currency system where payments are redistributed among the agents, with no funds removed or injected from outside. A weakly balanced budget occurs when participants make payments to the mechanism, but not vice versa.

– Computational/communication tractability: The cost of computing the outcome of allocating resources in an optimal or near-optimal fashion must be considered, as well as the communication effort that is needed to converge on these goals.

The underlying environment must also consider the following domain-specific requirements:

– Simultaneous trading: Support for multiple buyers and multiple sellers.

– Trading dependent resources: Buyers demand combinations (‘bundles’) of resources, and may bid on many such bundles at once through XOR bids.

– Support for multi-attribute resources: A single resource may have multiple attributes that are of interest to a buyer (e.g. a HDD has capacity and access time).

Existing market-based approaches do not account for time or quality constraints in the bidding and auction process. As such, the authors introduce a bidding language to express the auction process for a computational Grid, capturing the pertinent elements of a user's bid—such as the makeup and quality requested for a specific resource bundle, and the valuation they place on it. The problem is formulated as a generalisation of the combinatorial allocation problem, which is known to be NP-complete. The objective function attempts to maximise the surplus V∗, which is the sum of the differences between the buyers' valuations and the sellers' reservation prices. A Vickrey–Clarke–Groves pricing mechanism is then assumed. The proposed approach is simulated and its computational tractability evaluated (by solving the auctions using the well-known CPLEX solver), clearly demonstrating that the optimisation problem does not scale. The simulation shows an exponential relationship between the number of orders and the CPU time needed to compute the results of the auction. A scenario with 140 bundles comprising 303 bids took nearly a minute to compute, highlighting the critical need for a faster approximate solution to clear auctions.

3.2.2 Scheduling

Libra: Libra's proportional share approach allowed more jobs to be completed by their deadline (and consequently fewer jobs to be rejected) for a variety of cluster and workload combinations. However, such proportional share scheduling is not appropriate for all workloads—specifically, memory-intensive jobs would suffer due to excessive context switching. Furthermore, Libra's approach critically depends on the quality of the runtime estimation to schedule jobs and resources effectively. However, accurate runtime estimates cannot be guaranteed for many batch or cluster workloads [25, 37]. Furthermore, the operation of Libra depends on several limiting assumptions: that only one centralised gateway is used to submit jobs to the cluster, that there are no background jobs (the cluster nodes are dedicated), that CPU time is divisible and reservable, and that the estimated runtime E is accurate.

FirstPrice and FirstReward: FirstPrice [20] sorts and greedily schedules jobs based on a schedule that will maximise per unit return. FirstReward [20] considers both the per unit return for accepting a task and the risk of losing gains in the future. It utilises a present value (PV) approach that considers the yield countered by a discount_rate on future gains that could be made. A high discount_rate causes the system to discount (avoid) future gains and focus on short term jobs to obtain quick profit, which is a risk-averse strategy. Experiments have demonstrated that a high discount_rate is appropriate when there is a large skew between valuations of jobs in a schedule. FirstReward also integrates opportunity costs, which capture the value of the potential loss incurred by taking one job over another. Ideally, jobs should only be deferred (to chase extra profit) if they have a low decay rate, even if they have a high unit gain. The reward metric is a tuneable metric that weighs potential gain against opportunity costs, and can be weighted to control the degree to which the system considers expected gain. When penalties are bounded, it is beneficial to take some risk in order to gain more profit—the heuristic biases against low value jobs, and the penalty is capped. When the penalty is uncapped, taking considerable risks on future gain starts to become a poor strategy.
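
One illustrative way to express this discounting of future gains is a standard present value computation; the formula and numbers below are assumptions for the sketch, not the exact expression used in [20]:

    # Present value of a job's expected gain, discounted over the
    # delay before it pays off. A high discount_rate favours quick,
    # short-term profit (a risk-averse strategy). Illustrative only.
    def present_value(gain, delay, discount_rate):
        return gain / (1.0 + discount_rate) ** delay

    for rate in (0.01, 0.25):
        print(rate, round(present_value(100.0, delay=5, discount_rate=rate), 2))
    # a low rate keeps distant gains attractive; a high rate discounts them away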

FirstOpportunity and FirstOpportunityRate: Popovici and Wilkes propose two profit-based scheduling algorithms [32]. The first of these is FirstOpportunity, which examines the effect of running a job on the others in the queue. FirstOpportunity builds a new schedule for pending workloads by selecting each job in turn (using only the most profitable shape for that job) and uses FirstProfit to generate a schedule for the remaining jobs. Finally, the job is chosen that generates the schedule with the highest total profit. The second algorithm is FirstOpportunityRate, which is similar to FirstOpportunity but considers the job that produces the highest aggregate profit in proportion to the total schedule length.

When accurate resource information is available, FirstReward and FirstPrice have a higher acceptance rate and utilisation under low load, as they do not consider resource cost, only profit. FirstOpportunity and FirstOpportunityRate have the most accurate profit estimation (at the admission control stage) under all shown load conditions, as they consider existing jobs as well as the newly arriving job. Under more severe decay rates, FirstOpportunityRate has consistently better performance and accuracy, as it prioritises smaller, higher profit jobs. Under variable resource availability and price, FirstOpportunity and FirstOpportunityRate are the best policies, as they consider resource costs as well as price, whereas the other policies schedule on client utility or job size alone.

When uncertain resource information is available, FirstProfit was extended to consider profit reward versus the probability of achieving that profit, whilst the other policies are uncertainty-oblivious. FirstProfit admitted fewer jobs when uncertainty increased, reducing its risk exposure (i.e. accepting jobs then not meeting client targets). However, the profit rate was increased when FirstProfit took slightly more risk (20%).

Aggregate Utility: The use of aggregate utility functions in conjunction with traditional, single job utility functions is explored by AuYoung et al. [5]. These aggregate functions are used to reflect an end-user's preference for a “set” of jobs to be completed in or close to its entirety, and the resulting utility they place on such needs. These job sets or contracts are assumed to describe the individual jobs (number of jobs, size, arrival rates) and their utility accurately, and are well behaved. A job's value is equal to its utility function at the moment it completes, or its maximum penalty if cancelled.

3.2.3 Admission Control

FirstPrice and FirstReward: The use of admission control combined with FirstPrice and FirstReward [20] is valuable, especially under conditions of high system load. A decision can be made regarding whether it is beneficial to accept a task, before the system accepts it, by looking at how it integrates into the existing task mix. A slack metric is utilised that considers the amount of additional delay the system can impose on a job before it becomes unprofitable to accept it. High slack jobs are desirable, as they can potentially be rescheduled if a more profitable job comes along.
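
A minimal sketch of such a slack computation, reusing the linear decay of value past the deadline described in Section 3.1.2; all numbers are illustrative:

    # Slack: how much extra delay a job can absorb before accepting it
    # stops being profitable, i.e. before its decayed payment falls
    # below the provider's cost of running it. Illustrative numbers.
    def slack(value, deadline, decay_rate, run_cost, now=0.0):
        # latest profitable completion time solves:
        #   value - decay_rate * (t - deadline) == run_cost
        latest = deadline + (value - run_cost) / decay_rate
        return latest - now

    print(slack(value=100, deadline=60, decay_rate=2, run_cost=30))  # 95.0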

PositiveOpportunity: PositiveOpportunity [32] is the admission control component of FirstOpportunity and FirstOpportunityRate. PositiveOpportunity computes a job schedule, based on the matching scheduling policy, for all tasks including the new job (with all possible allocations for the new job enumerated), and then for all tasks excluding the new job. The admission control algorithm then chooses the most rewarding schedule; if the new job is in that schedule, it is admitted.

Aggregate Utility: Admission control can also be done for both contracts (sets of jobs) and individual jobs [5]. When new contracts arrive, feasibility and profitability tests are performed to ensure that it is worth a provider's while. Admission control is also done at the per-job level with each job's individual utility function. However, the utility function includes a bias that reflects the intrinsic value of the job when processed as part of the set it belongs to.

Libra: The Libra RMS has some rudimentary admission control functionality [35]. The provider will check whether the budget is sufficient and whether there is a cluster node that can feasibly meet the prescribed deadline. If no node can meet the deadline, the job is rejected. In this case, a user should try again later or try a more relaxed deadline. By doing this, Libra can guarantee that an accepted job will be processed before its deadline, provided the runtime estimate E is accurate. LibraRisk [43] addresses one of the key weaknesses of Libra by enhancing admission control to consider delays caused by inaccurate runtime estimates. Each newly arriving job is examined to gauge the risk of that job causing a deadline to be exceeded and the resultant delay to the other jobs queuing in the cluster. Specifically, the risk of delay is computed for each node in the cluster, and a job is only accepted if it can be accommodated on a node such that there is zero risk of delay. Experimental results showed that, in the face of inaccurate user runtime estimates, LibraRisk accepted and completed more jobs than Libra, while maintaining better slowdown across a variety of workload conditions. Libra+$ [41] has enhanced admission control capabilities over Libra, considering three criteria when admitting jobs: that the resources required are available, that the deadline set is feasible, and that the budget to finish the job within the deadline is sufficient to cover the costs of the resources. However, unlike LibraRisk, Libra+$ does not consider the risk of inaccurate runtime estimations. LibraSLA [42] takes this further by considering the effect of accepting a new job on the service level agreements (SLAs) of existing jobs, weighing up the additional value that a new job earns against the penalty of breaching an existing job's SLA and being liable to pay a penalty.

3.3 Managing the Grid Economy

Managing the Grid economy effectively is crucial to the success of any market-driven Grid computing system. Indeed, without some form of constraint and scarcity in the economy, there is little incentive for users to value and bid on their jobs truthfully. With unbounded currency, users would simply bid the largest amount possible with unrealistic deadlines, regardless of the relative importance of their jobs, with no thought of rationing out their currency for later use. If all users bid in this fashion, the market-based approach becomes redundant and offers no benefit over traditional ‘fair’ scheduling.

There are two major considerations for managing the Grid economy. The first is the nature of the currency accepted in the economy. Currencies can be virtual, where the currency system is valid only within a particular Grid economy, or they can be real, where the money supply is obtained from or linked to currency markets accepted in the world economy. The advantages and disadvantages of both approaches are discussed in Section 3.3.1. The second consideration is how the currency in question is managed—that is, how it is injected, taxed, controlled and dispersed. This is described in Section 3.3.2.

3.3.1 Virtual vs. Real Currency

A well-defined currency system is essential to ensure that resources are efficiently shared and allocated in a Grid computing marketplace. The use of virtual or real currencies has respective advantages and disadvantages. Shneidman et al. [36] found that most deployed computational markets use virtual currency, due to its low risk and low stakes in case of mismanagement or abuse. However, they note that virtual currency has its drawbacks, as it requires careful initial and ongoing management of the virtual currency economy, which is often ignored due to the low stakes involved. There is also a critical lack of liquidity and flexibility in virtual currencies, with users unable to ‘cash out’ or transfer their credits to other systems easily, if at all.

These approaches tend to be most appropriate for single marketplaces, such as researchers from one organisation sharing an HPC cluster. In this instance, the currency is simply a means to regulate access, especially under high contention, and to encourage socially responsible behaviour from users. Whilst these represent fairly simplistic marketplaces, currencies still need to be managed appropriately for any benefit to be realised over traditional scheduling and resource allocation techniques.

The use of real currency is appealing for a number of reasons. In some computational marketplaces, users must contribute their own resources (e.g. CPU, disk, etc.) in order to be granted credits to access the wider shared resources. This can be prohibitive for some, who may want shorter term access and do not wish to be permanent participants in a Grid. Allowing these users to buy access using real currency can lower the bar of entry, and increase overall utilisation and revenue for the system. Real currency formats (e.g. USD, Euro, etc.) are universally recognised, easily transferred and exchanged, and are managed outside the scope of a Grid marketplace, by linked free markets and respective government policy. However, many of the same problems that exist in real-world economies apply to computational marketplaces. It is entirely possible for one user to be denied access to a system (effectively priced out of the market) by another user with more money. The distribution of wealth held by users will have a critical effect on a computational Grid's accessibility, affordability and utilisation. As such, careful consideration must be given when choosing between virtual and real currency systems. For a shared system, where the aim is equality of access, a real currency system may be inappropriate. For a profit-driven commercial Grid marketplace, an economy based on real currency would likely be the most appealing option.

3.3.2 Currency Management and Dispersal

A ‘GridBank’ [7] has been proposed by Barmouta and Buyya for providing accounting services for computing resources, thereby enabling them to be shared and bartered between multiple administrative domains. After a client’s Grid Resource Broker (GRB) negotiates with a Grid Service Provider (GSP) to set the price and amount of resources needed, the client issues a GridCheque through the GridBank (subject to sufficient funds) to the charging module of the GSP. The client then submits their job to the GSP for execution. Upon completion, the charging module on the GSP contacts the GridBank server to redeem the cheque based on the original price negotiated. It is noted that the GridBank service is mostly focused toward a co-operative model, where participants are both producers and consumers, and are allocated a certain number of credits upon joining the system. The aim in this scenario is price equilibrium, but no specific measures, such as currency management to encourage certain behaviours, are implemented to ensure that this occurs.
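The cheque-based settlement flow just described can be summarised in a few lines of code. The following is a minimal sketch assuming an in-memory ledger; the class and method names are ours and do not reflect GridBank’s actual interfaces.

    class GridBankSketch:
        def __init__(self):
            self.accounts = {}  # participant id -> credit balance

        def issue_cheque(self, payer, amount):
            # A GridCheque is only issued subject to sufficient funds.
            if self.accounts.get(payer, 0) < amount:
                raise ValueError("insufficient funds")
            return {"payer": payer, "amount": amount, "redeemed": False}

        def redeem_cheque(self, cheque, payee):
            # Called by the GSP's charging module once the job has completed.
            assert not cheque["redeemed"], "cheque already redeemed"
            self.accounts[cheque["payer"]] -= cheque["amount"]
            self.accounts[payee] = self.accounts.get(payee, 0) + cheque["amount"]
            cheque["redeemed"] = True

In this sketch, a GRB would call issue_cheque after price negotiation and pass the cheque to the GSP along with the job; the GSP calls redeem_cheque on completion.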

The Bellagio [4] system offers some rudimentary controls on the distribution of wealth in the virtual economy, such as when to inject currency into the system and how much. This subsequently controls the share of resources received by participating users. The total amount of currency that can be accumulated by a given user is bounded to reduce the chance of abuse. This avoids situations such as a user hoarding currency for lengthy periods of time, then dominating the available resources and causing starvation.
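Bounded accumulation of this kind reduces to a one-line rule at injection time. The sketch below is ours, with an arbitrary cap value; Bellagio’s actual policy parameters are not published in this form.

    CAP = 1000  # assumed per-user ceiling; the real bound is policy-defined

    def inject_currency(balances, users, amount):
        # Periodic injection controls the money supply; clamping to CAP
        # stops idle users from hoarding enough currency to later starve
        # other participants of resources.
        for user in users:
            balances[user] = min(balances.get(user, 0) + amount, CAP)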

The Mirage [12] system offers more sophisticated currency management features. Some of its novel features (over Bellagio) include proportional-share profit sharing, where proceeds from cleared auctions are distributed proportionally to idle users to accumulate ad-hoc credit, and a savings tax to address unbalanced usage patterns and avoid excessive accumulation of credit. Indeed, over time a user’s credit regresses back to a baseline amount. The central bank and economy are more sophisticated than Bellagio’s, as users have standard currency as well as currency ‘shares’, which affect the proportion of currency distributed back to idle users after an auction clears. The proportional savings tax imposed on the currency held by users is known as the ‘use it or lose it’ policy.
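The two Mirage policies described above, proportional profit sharing and the savings tax, can be sketched as follows. The function names, baseline and tax rate are illustrative assumptions, not Mirage’s published constants.

    def share_proceeds(balances, shares, idle_users, proceeds):
        # Proceeds from a cleared auction are distributed to idle users in
        # proportion to the currency 'shares' they hold.
        total = sum(shares[u] for u in idle_users)
        if total == 0:
            return
        for u in idle_users:
            balances[u] = balances.get(u, 0) + proceeds * shares[u] / total

    def savings_tax(balances, baseline=100, rate=0.05):
        # 'Use it or lose it': balances above the baseline decay toward it
        # each accounting period, so idle credit regresses to the baseline.
        for user, bal in balances.items():
            if bal > baseline:
                balances[user] = bal - rate * (bal - baseline)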

Irwin, Chase et al. propose a self-recharging virtual currency system for shared computing infrastructure such as test-beds like PlanetLab [8] and Intel SensorWeb [18]. They aim to address the so-called ‘tragedy of the commons’, where individual interests compete against the common good of all participants. They are primarily motivated by its use in a Cereus-based system for service-oriented computing. The authors propose recycling currency through the economy, while artificially capping the rate of spending by users, seeking to avoid the currency hoarding and starvation that can occur in other market-based systems. The economy critically depends on an accountable, third-party-verifiable ‘claim’ system such as SHARP [16] to manage currency and resources.

In this system, credits recharge after a fixed recharge time from the time they are spent (or committed to a bid). This is in an effort to avoid hoarding and starvation but, as the authors note, if the recharge time is too short the system resembles a simple random lottery, while if it is too long it resembles a typical money economy, with all the problems typically associated with one. Credit for a user is capped at a fixed budget of c: users cannot hold or bid more than c credits at any given time.

The fundamental issue in this proposal is when to recharge credits spent on winning bids, as this will significantly affect user behaviour. Existing work recharges credits when the purchased contract has expired, encouraging users to bid for short-term gratification rather than biding their time and bidding into the future. A Cereus system instead recharges credits a fixed time r after the user commits credit to a bid, encouraging early bidding for auctions in the future. Once committed, these credits are unavailable to users; they cannot bid on concurrent auctions with money they do not have.
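These recharge mechanics fit in a small data structure. The sketch below is a minimal interpretation of the rules just described (budget c, recharge time r, committed credits unavailable); the class name and representation are ours.

    import heapq

    class RechargingWallet:
        def __init__(self, c, r):
            self.c = c            # fixed budget: maximum holdable credits
            self.r = r            # recharge delay after a commit
            self.pending = []     # min-heap of (recharge_time, amount)

        def available(self, now):
            # Drop commitments whose recharge time has passed.
            while self.pending and self.pending[0][0] <= now:
                heapq.heappop(self.pending)
            committed = sum(amount for _, amount in self.pending)
            return self.c - committed

        def commit(self, amount, now):
            # Committed credits cannot be reused for concurrent auctions.
            if amount > self.available(now):
                raise ValueError("insufficient uncommitted credits")
            heapq.heappush(self.pending, (now + self.r, amount))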

However, without results it is unclear if this approach actually maximises end-user utility. Consider a group of users, each with c credits, bidding on a single auction. All users bid c credits, and bid early. In a Cereus system, the broker accepts all bids, and returns a proportional amount of resources to each end-user, effectively negating the useful features of market-based scheduling. We propose that a bounded, randomly distributed recharge time could potentially diffuse this group behaviour. It is also unclear how the recharging currency system interacts with multiple concurrent auctions, or with the length of contracts awarded to end-users.
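Our proposed randomisation would be a one-line change to the sketch above: draw each commit’s recharge delay from a bounded distribution instead of using a fixed r. The bounds below are placeholders.

    import random

    def random_recharge_delay(r_min=60.0, r_max=300.0):
        # A bounded, randomly distributed recharge time staggers the moment
        # at which users regain credit, breaking up lock-step group bidding.
        return random.uniform(r_min, r_max)

    # e.g. in RechargingWallet.commit:
    # heapq.heappush(self.pending, (now + random_recharge_delay(), amount))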

3.3.3 Trust Models and Accounting

SHARP is a framework for secure resource management (resource discovery and allocation) for wide-area, decentralised networked computing platforms [16]. SHARP provides participants with ‘claims’ on shared computing resources, which are probabilistic (i.e. not guaranteed) promises over resources for a specified time period. These claims can be utilised, subdivided or on-sold at the whim of the claim holder. These claims form the basis of a decentralised resource peering, trading and bartering system, with each claim being authorised by a chain of self-signed delegations anchored in the site authority of the resource provider of that claim.

SHARP is particularly suited to shared networks that can partition their resources into easily divisible units (slices), such as virtual machines. A local scheduler at each resource has the ultimate responsibility for enforcing any successful resource allocation claim. The claims process itself is split into two phases. In the first phase, a service manager (acting on behalf of a user who needs resources) obtains a ‘ticket’: a soft, probabilistic claim on a specific resource for a period of time. In the second phase, the ticket must be converted into a concrete reservation by contacting the site authority for the resources and requesting a ‘lease’. A ‘lease’ is a hard claim which is guaranteed valid, except in the instance of a resource failure. The separation of the claims process into two phases allows the site authority to consider current conditions (e.g. load) when determining when and how to redeem the claim, or even to reject the claim outright.
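The two-phase ticket-to-lease protocol can be captured in a short sketch. All names and fields below are illustrative; SHARP’s actual data structures carry cryptographic delegation chains that are omitted here.

    def obtain_ticket(agent, units, start, end):
        # Phase one: the service manager obtains a soft, probabilistic claim.
        return {"issuer": agent, "units": units, "start": start, "end": end}

    class SiteAuthority:
        def __init__(self, capacity):
            self.capacity = capacity
            self.leased = 0

        def redeem(self, ticket):
            # Phase two: the site authority checks current conditions and may
            # reject an oversubscribed ticket outright; the holder then seeks
            # recourse from the issuing agent.
            if self.leased + ticket["units"] > self.capacity:
                return None
            self.leased += ticket["units"]
            return {"ticket": ticket, "hard": True}  # valid barring failures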

Claims in SHARP may be oversubscribed, in that the agent acting on behalf of a resource has issued more tickets than it can support. This can improve resource utilisation by statistical multiplexing, but it also means that the claims themselves are probabilistic, and do not guarantee access. The probability of a claim being successfully converted into a lease naturally depends on the level of oversubscription. In the event of a claim being rejected, the victim can seek recourse by presenting the rejected claim to the issuing agent, so that the responsible agent can compensate them. In the instance that a resource is undersubscribed, the resource claims are effectively hard reservations.

Of course, SHARP does not function as an entire resource management system. In particular, the onus is on resource providers to manage and enforce the allocations, in the form of tickets and leases, that SHARP creates for participants. As SHARP is intended for use with systems without global trust, where resource claims can be arbitrarily divided and given or on-sold to anonymous third parties, security and abuse are significant problems. The extent of these problems is largely dependent on the platform SHARP is deployed on, and its ability to isolate users from one another, and to track and prevent abuse. SHARP mitigates some of this risk by only allowing access (but not control) to slices of resources for fixed periods of time. If an abusive user is detected, their access can be revoked. The specifics of this revocation are again left to the local resource managers themselves.

Motivated by the SHARP leasing framework, Irwin et al. [19] extend it further with SHIRAKO, a generic and extensible system for on-demand leasing of shared network resources. SHIRAKO brings dynamic logic to the leasing process, allowing users to lease groups of resources from multiple providers over multiple physical sites, through the services of resource brokers. SHIRAKO enhances the functionality of SHARP by adapting to dynamic resource availability and changing load conditions through autonomous behaviour of the ‘actors’ in the system. In particular, SHIRAKO allows ‘flexible’ resource allocation through leases which can be re-negotiated and extended by mutual agreement, which was not possible using SHARP alone. This removes the need to obtain a new lease for any residual demand that exists once an existing lease expires, reducing resource fragmentation and improving continuity. Additional logic is provided for making resource leases more flexible for users. A request can be defined as ‘elastic’, to specify that a user will accept fewer resources if its full allocation is not available. Requests can be ‘deferrable’ if a user will accept a later start time than the one specified in the lease, should that time be unavailable. Request groups are also defined, allowing co-scheduling: a set of tickets can be assigned to a request group, ensuring (where possible) that each request is satisfied within a common time window.
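The request properties SHIRAKO adds are easy to picture as fields on a lease request. The encoding below is a sketch of ours; SHIRAKO’s actual API and property names differ.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LeaseRequest:
        units: int                   # resources requested
        start: float                 # desired start time
        duration: float              # lease length
        elastic: bool = False        # accept fewer units than requested
        deferrable: bool = False     # accept a later start time
        group: Optional[str] = None  # co-scheduled within a common window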

4 Catallaxy Market Architectures

In the last three years there has been an increased research focus on Austrian economist F.A. von Hayek’s notion of a ‘Catallaxy’, and on applying it to market-driven Grid computing. The idea of ‘Catallactics’ considers markets where prices evolve from the actions of economically self-interested participants. Each participant tries to maximise their own gain whilst having limited information available to them.

Eymann et al. [14] have investigated the issues and requirements of implementing an electronic Grid market based on the concept of ‘Catallaxy’, a ‘free market’ economic self-organisation approach. However, the underlying allocation task is a complex multi-attribute allocation problem, shown to be NP-complete in previous work by other researchers in the field. The authors note that in interrelated markets, the allocation of resources and services in one market invariably influences the outcomes in the other market. These interactions should occur without relying on a traditional centralised broker. Participants should be self-organising and follow their own interest, maximising their own utility. A catallaxy approach works on the principle that there are autonomous decentralised agents, with constant negotiation and price signalling occurring between them. Indeed, changing conditions (availability, competition) on the resource market will be reflected by cascading price changes that reflect the respective scarcity of, and demand for, a resource. Participants must read these signals and react accordingly.
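As a concrete illustration of such price signalling, a provider agent might adjust its posted price purely from local information, as in the sketch below. The thresholds and step size are arbitrary assumptions; the catallactic literature does not prescribe a single rule.

    def update_price(price, sold, capacity, step=0.05):
        # Raise the price under scarcity, lower it under surplus. Each agent
        # applies its own rule to local observations only; buyers reading
        # the resulting signals decide whether to seek alternative sources.
        utilisation = sold / capacity
        if utilisation > 0.8:
            return price * (1 + step)
        if utilisation < 0.4:
            return price * (1 - step)
        return price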

The principles of the catallaxy, as applied to computational Grids, are stated by Eymann et al. [14] as follows:

1. Participants work for their own self-interest; each element is a utility-maximising entity.

2. Entities do not have global knowledge; they can only act on information as it is made available to them. They must adapt to constantly changing signals from downstream and upstream entities.

3. The market is simply a communications bus; price changes (e.g. increases and decreases) will dictate whether an entity looks for alternative sources to procure a specific resource, making the market dynamic.

A prototype model for the ‘Catallaxy’ was presented by the authors, consisting of an application layer and a service layer. In the application layer, complex services are mapped to basic services. The service layer maps service requests to actual resources provided by local resource managers. The potential scenarios for existing Grid marketplaces at the service layer typically fall into three categories: contracting resources in advance, contracting resources after finalising the service contract with the client, and contracting resources during negotiation.

When contracting resources in advance, future demand must be forecast. It is a centralised and static solution, and can lead to poor utilisation. Contracting resources after finalising the service contract with a client can be risky, as insufficient resources could be available. Contracting resources during negotiation is highly desirable, as a user can choose between multiple providers based on cost, and can also adapt to changing prices, reducing risk and ensuring the market is balanced. The authors use the latter approach to form the basis of a Catallaxy-inspired Grid market.

The design of the service discovery mechanism is also critical. Eymann et al. [14] claim that centralised registries are not appropriate, due to the decentralised nature of buyers and sellers. Distributed Hash Tables may be suitable, but they lack scalability in dynamic networks where state is constantly changing, leading to high communication and computational overhead.

Eymann et al. [14] additionally note that the choice of auction strategy is also important in a ‘Catallaxy’-driven Grid marketplace. Options include fixed prices, Dutch auctions, English auctions and Double auctions. However, we note that any auction approach chosen must be appropriate for the scale and dynamism of linked ‘Catallaxy’-inspired Grid marketplaces, completing rapidly with minimal computation and communication overhead.

Ardaiz et al. [3] explore a decentralised, economic and utility-driven approach to resource allocation in a Grid marketplace. In such environments, each participant (client, service provider and resource provider) independently tries to maximise their own utility by following a self-interested strategy. A decentralised approach is evaluated for Grids of various densities and reliabilities, and is compared against a centralised, statically optimal approach. In effect there are two markets operating: one for resources, where service providers procure resources from resource providers, and one for services, where clients procure services from service providers. As such, the client is not aware of the particulars of the resource provider, and vice versa. However, the service provider participates, and tries to maximise utility, in both markets.

An experimental framework is presented in order to simulate varying Grid markets under realistic settings, with a topology following that of the Internet. There are highly connected, high-bandwidth and high compute power centres (e.g. HPC facilities), and also smaller resources available toward the regional centres and edges of the system. The various strategies and tendencies for negotiation in the two markets are reflected probabilistically from a previous study of the catallaxy. The range of valuations and pricing limits that participants place on their negotiations is very narrow, which is inconsistent with previous observations in deployed Grid marketplaces, where four orders of magnitude difference has been observed in valuation, with little correlation to the service requirements needed [27]. Each request unit also requires an equal amount of computation and bandwidth, which does not reflect the mix of data- and compute-intensive jobs found in Grid computing workloads.

In highly loaded, high-density Grids, Ardaiz et al. found that total utility and resource efficiency decreased for both the baseline and distributed approaches, due to difficulties in making successful allocations caused by high contention for resources. Response time is worse for the distributed approach, due to the flooding discovery mechanism used to find resources. As Grid density increases, it takes longer for the distributed catallactic approach to find and allocate resources, as discovery time increases with resources being situated on the edges of the network. A small increase is seen in the baseline approach, due to the need to monitor a larger number of resources. As the network becomes more dynamic and prone to failure, the catallactic approach shows better response time, as it is more resistant to poor allocation choices and can re-negotiate with other resources in place of failed ones.

5 Discussion

Based on our study of existing utility-driven and market-based distributed computing architectures, we consider the necessary salient features of a next generation, general purpose and ad-hoc utility computing marketplace. We are motivated by the emerging ‘Catallaxy’ paradigm, where multiple linked computing marketplaces, each consisting of self-interested, utility-maximising entities, interact and evolve depending on market conditions. Participants in these decentralised systems are constantly adapting to current conditions depending on the market signals. In this section we discuss how to potentially address many of the limitations of existing utility computing systems, in a manner that complements the ‘Catallaxy’ view of the next generation of computing marketplaces.

Many existing systems (such as Bellagio [4], Mirage [12], etc.) have restrictive price setting and negotiation policies. Auctions are held at fixed intervals, and only one type of auction is allowed (e.g. First Price, Second Price). Like GRACE [10, 11], we believe that the choice of negotiation and pricing protocols should be left up to each participant in the system. This is crucial, as the choice of pricing (fixed, variable) and negotiation protocol (auction, one-to-one, agent driven, etc.), with the permutations and combinations thereof, can have an enormous effect on the utility gained by the participants depending on the current market conditions. Issues such as resource scarcity (or glut), economic conditions, time constraints, and the number of participants (i.e. competition) will produce very different outcomes for each combination of pricing and negotiation.
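One way to realise such per-participant choice is to treat pricing and negotiation as pluggable strategies behind a common interface. The following sketch is our own illustration of the idea; neither GRACE nor the cited systems define this interface.

    from abc import ABC, abstractmethod

    class NegotiationStrategy(ABC):
        # Each participant supplies its own pricing/negotiation behaviour.
        @abstractmethod
        def quote(self, resource, market_state):
            ...

        @abstractmethod
        def respond(self, offer, market_state):
            ...  # returns 'accept', 'reject' or a counter-offer

    class FixedPrice(NegotiationStrategy):
        # A trivially simple strategy: post one price, take it or leave it.
        def __init__(self, price):
            self.price = price

        def quote(self, resource, market_state):
            return self.price

        def respond(self, offer, market_state):
            return "accept" if offer >= self.price else "reject"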

Management of the computational economy becomes considerably more complicated when many disparate resource providers are linked together in the same ‘marketplace’. Most of the systems studied in this paper are closed economies, where money is distributed and recycled through the system. The currency is typically virtual, not real, and as such has the associated strengths and weaknesses described in Section 3.3.1. Most notably, virtual currencies lack liquidity and flexibility, but offer low stakes and low risk for service providers in the event of abuse. To utilise these systems, users must earn credit through either in-kind contribution of resources, or via periods of inactivity and judicious use of the available services. Unfortunately, this can raise the barrier of entry, making it unsuitable for a user who needs to access the service immediately, even though they may be willing to financially compensate a service provider if the option were available.

Most commercial service providers offer their resources in exchange for real currency (e.g. USD, Euro, etc.). In these systems the currency market is open, well defined and liquid, and addresses some of the access issues that plague virtual currency systems. Subject to availability, users can simply ‘buy in’ to get access to a provider’s resources. Conversely, systems utilising real currency can face many of the same issues that occur in real marketplaces, where access to systems can be determined by who has the most money, and the distribution of wealth held by users can be heavily skewed.

A next generation broker is needed to manage interaction and interoperability between these two very different service paradigms. It is not appropriate to simply dictate that one style of currency is used. For many distributed systems, Grids and testbeds, the intent of the system is to provide equitable access for as many people as possible. These systems may have been created with government or grant funds for greater social benefit, and as such any attempt to profit from these systems would be inappropriate, beyond basic cost recovery. In this instance, the constrained virtual currency is simply a means to regulate access to the system. For commercial providers, profit is the overriding motive, so virtual currency is not an appropriate means of compensation for the services they offer.

A next generation broker in an ad-hoc marketplace should act as a ‘middleman’ between the user and one or many service providers. These providers can be either commercial or open access in nature. The broker is essentially a value-added service, aggregating access to multiple providers that may not necessarily be accessible to an end-user. In the case of open access systems that utilise virtual currency tied to a particular system, a broker could ‘earn’ access for later use, via in-kind contribution of resources, or simply by accumulating credit by joining the system and remaining idle. The broker can then utilise this credit when end-users request access to these systems, removing the onus on the user to join or contribute to that particular system and giving them immediate access. Such access could be procured from a broker for real currency (where the cost is dictated by the broker), or by mutual agreement a user could offer virtual credit at a different computing facility in exchange for access to the service provider offered by the broker. For instance, we could envisage a scenario where a user offers a broker X access credits from Mirage [12] for Y access credits at Bellagio [4] (depending on the relative worth, negotiated by the user and the broker). Such trading arrangements could also be made between different brokers in a marketplace.
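A broker’s credit exchange could be as simple as a table of negotiated rates between virtual economies. The sketch below is hypothetical; the systems named publish no such exchange, and the rate is a placeholder for whatever the parties negotiate.

    def exchange_credits(rates, have, want, amount):
        # Convert credits between two virtual economies at a negotiated rate.
        return amount * rates[(have, want)]

    # e.g. a user trades 10 Mirage credits for Bellagio credits at an
    # assumed rate of 2.5 Bellagio credits per Mirage credit:
    rates = {("mirage", "bellagio"): 2.5}
    bellagio_credits = exchange_credits(rates, "mirage", "bellagio", 10)  # 25.0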

Resource reservation, allocation and service guarantees largely depend on the service provider. The service provider can choose whether it offers ‘hard’ or ‘soft’ guarantees, and whether compensation is given for missed deadlines or lack of service. A broker must then decide if it is satisfied with these guarantees when reselling the services of a provider. In the event that a service provider continually fails to meet agreed quality of service targets, a broker can seek compensation (if available) or simply terminate its arrangement with that provider, and find another. The broker itself could potentially oversubscribe access to a given service provider by selling more resources than it has acquired or reserved from that provider. This can be risky, as a provider (which has its own resource allocation and admission control policies) can simply refuse a request for access to resources, leaving the broker liable for over-promising on resources it does not have. In this instance, a user could seek compensation from a broker (if available), or simply find another, more reliable system.

A reputation system would also complement future ad-hoc utility computing systems: continual offenders could earn a poor reputation, allowing others to avoid them where possible. Conversely, good service could be rewarded with a positive reputation, attracting more customers for service providers and brokers alike. Participants (i.e. users, brokers and service providers) could track this individually, based on their own experiences, or through a co-ordinated, decentralised registry.
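Individually tracked reputation needs little more than a decaying average of observed outcomes. The rule below is one common choice (an exponentially weighted average), offered as a sketch under our own assumptions; the discussion above does not prescribe a particular scheme.

    def update_reputation(reputation, provider, success, alpha=0.1):
        # Each successful dealing pulls the score toward 1, each failure
        # toward 0; alpha weights recent experience over older history.
        score = reputation.get(provider, 0.5)  # neutral prior
        target = 1.0 if success else 0.0
        reputation[provider] = (1 - alpha) * score + alpha * target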

As discussed in Section 2.3, the emergence of widely available, commodity virtual machine technology has simplified the administration and allocation of resources for service providers. Virtual machine infrastructure such as Xen [6] and VMware [2] has allowed these providers to partition resources such as memory, disk space and processor cores into virtual machines with arbitrary capabilities, with minimal overhead. This can allow greater efficiencies for providers, through improved statistical multiplexing when hosting multiple virtual machines.

Despite this, brokers could still play an important role in VM-driven computing marketplaces. They can provide numerous value-added services, such as providing pre-made virtual machine images for common tasks, or base images for users to trivially build upon and add their own application logic. A user could utilise such base images to create their own custom, self-contained container for execution, along with any data, libraries and applications that are needed for the duration of execution. This removes a significant amount of complexity for users, as they no longer need to ensure that the relevant data, libraries and applications are available on the target execution environment hosted by a service provider.

6 Conclusion

In this work, we have evaluated the state-of-the-art in market-driven utility computing platforms. We provided an overview of the key components of these platforms, identifying the roles and responsibilities of the participants, the effect of market-based techniques and the emerging technological advances that are propelling market-driven utility computing platforms forward. We examined the state-of-the-art in utility-oriented computing markets, identifying the strengths and weaknesses of current related systems and approaches to pricing, negotiation, resource and economy management. The Catallaxy approach to computing marketplaces was highlighted, showing the benefits of this new decentralised, utility-maximising framework in addressing the utility computing problem. We discussed how the limitations of many existing utility computing systems can be addressed, and identified areas that need further attention for these approaches to flourish. As market-oriented utility computing concepts mature, and resource partitioning and isolation technologies emerge that allow nearly arbitrary division of resources, it is becoming clear that ‘market-inspired’ resource management provides a compelling alternative to traditional resource management. Indeed, the rapid emergence of service-oriented computing facilities (such as Amazon’s Elastic Compute Cloud), combined with expanding industrial uptake, has increased the mindshare of ‘on-demand’ computing marketplaces. However, it is clear that significant work needs to be done to move from isolated Grid markets toward fully linked, interoperable and ad-hoc marketplaces. This is essential to reap the full benefits of Grid computing, by improving the economies of scale and allowing users to harness resources from numerous administrative domains with ease.

Acknowledgements This work was supported under the Department of Education, Science and Training (DEST) International Science Linkages (ISL) project, “The Utility Grid Project: Autonomic and Utility-Oriented Global Grids for Powering Emerging e-Research Applications”.

References

1. Abramson, D., Buyya, R., Giddy, J.: A computational economy for Grid computing and its implementation in the Nimrod-G resource broker. Future Gener. Comput. Syst. 18(8), 1061–1074 (2002)

2. Adams, K., Agesen, O.: A comparison of software and hardware techniques for x86 virtualization. In: ASPLOS '06: Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems, pp. 2–13. ACM Press, New York, NY, USA (2006)

3. Ardaiz, O., Artigas, P., Eymann, T., Freitag, F., Navarro, L., Reinicke, M.: The catallaxy approach for decentralized economic-based allocation in Grid resource and service markets. Appl. Intell. 25(2), 131–145 (2006)

4. AuYoung, A., Chun, B., Snoeren, A., Vahdat, A.: Resource allocation in federated distributed computing infrastructures. In: OASIS '04: Proceedings of the 1st Workshop on Operating System and Architectural Support for the On-demand IT InfraStructure (October 2004)

5. AuYoung, A., Grit, L., Wiener, J., Wilkes, J.: Service contracts and aggregate utility functions. In: HPDC '06: Proceedings of the 15th IEEE International Symposium on High Performance Distributed Computing, pp. 119–131 (June 2006)

6. Barham, P., Dragovic, B., Fraser, K., Hand, S., Harris, T., Ho, A., Neugebauer, R., Pratt, I., Warfield, A.: Xen and the art of virtualization. In: SOSP '03: Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles, pp. 164–177. ACM Press, New York, NY, USA (2003)

7. Barmouta, A., Buyya, R.: GridBank: a Grid accounting services architecture (GASA) for distributed systems sharing and integration. In: ICEC '03: Workshop on Internet Computing and E-Commerce, Proceedings of the 17th International Symposium on Parallel and Distributed Processing, p. 245.1. IEEE Computer Society, Washington, DC, USA (2003)

8. Bavier, A., Bowman, M., Chun, B., Culler, D., Karlin, S., Muir, S., Peterson, L., Roscoe, T., Spalink, T., Wawrzoniak, M.: Operating system support for planetary-scale network services. In: NSDI '04: Proceedings of the Symposium on Networked Systems Design and Implementation (March 2004)

9. Buyya, R.: Economic-based distributed resource management and scheduling for Grid computing. Ph.D. thesis, Monash University (April 2002)

10. Buyya, R., Abramson, D., Giddy, J.: Economy driven resource management architecture for computational power Grids. In: PDPTA '00: Proceedings of the 7th International Conference on Parallel and Distributed Processing Techniques and Applications (June 2000)

11. Buyya, R., Abramson, D., Venugopal, S.: The Grid economy. Proc. IEEE 93(3), 698–714 (March 2005)

12. Chun, B., Buonadonna, P., AuYoung, A., Ng, C., Parkes, D., Shneidman, J., Snoeren, A.C., Vahdat, A.: Mirage: a microeconomic resource allocation system for sensornet testbeds. In: EMNETS '05: Proceedings of the 2nd IEEE Workshop on Embedded Networked Sensors (May 2005)

13. Chun, B.N., Culler, D.E.: User-centric performance analysis of market-based cluster batch schedulers. In: CCGRID '02: Proceedings of the 2nd IEEE/ACM International Symposium on Cluster Computing and the Grid, p. 30. IEEE Computer Society, Washington, DC, USA (2002)

14. Eymann, T., Reinicke, M., Streitberger, W., Rana, O., Joita, L., Neumann, D., Schnizler, B., Veit, D., Ardaiz, O., Chacin, P., Chao, I., Freitag, F., Navarro, L., Catalano, M., Gallegati, M., Giulioni, G., Schiaffino, R.C., Zini, F.: Catallaxy-based Grid markets. Multiagent Grid Systems 1(4), 297–307 (2005)

15. Foster, I., Kesselman, C.: The Grid: Blueprint for a New Computing Infrastructure. Morgan-Kaufmann (2001)

16. Fu, Y., Chase, J., Chun, B., Schwab, S., Vahdat, A.: SHARP: an architecture for secure resource peering. SIGOPS Oper. Syst. Rev. 37(5), 133–148 (2003)


17. Henderson, R.L.: Job scheduling under the portable batch system. In: IPPS '95: Proceedings of the Workshop on Job Scheduling Strategies for Parallel Processing, pp. 279–294. Springer-Verlag, London, UK (1995)

18. Irwin, D., Chase, J., Grit, L., Yumerefendi, A.: Self-recharging virtual currency. In: P2PECON '05: Proceedings of the 2005 ACM SIGCOMM Workshop on Economics of Peer-to-Peer Systems, pp. 93–98. ACM Press, New York, NY, USA (2005)

19. Irwin, D.E., Chase, J.S., Grit, L.E., Yumerefendi, A.R., Becker, D., Yocum, K.: Sharing networked resources with brokered leases. In: USENIX '06: Proceedings of the USENIX Annual Technical Conference, General Track, pp. 199–212 (2006)

20. Irwin, D.E., Grit, L.E., Chase, J.S.: Balancing risk and reward in a market-based task service. In: HPDC '04: Proceedings of the 13th IEEE International Symposium on High Performance Distributed Computing, pp. 160–169. IEEE Computer Society, Washington, DC, USA (2004)

21. Huberman, B.A., Lai, K., Fine, L.: Tycoon: A Distributed Market-based Resource Allocation System. Technical Report arXiv:cs.DC/0404013, HP Labs, Palo Alto, CA, USA (April 2004)

22. Lai, K.: Markets are dead, long live markets. SIGecom Exchanges 5(4), 1–10 (2005)

23. Lai, K., Rasmusson, L., Adar, E., Zhang, L., Huberman, B.A.: Tycoon: an implementation of a distributed, market-based resource allocation system. Multiagent Grid Systems 1(3), 169–182 (2005)

24. Li, C., Li, L., Lu, Z.: Utility driven dynamic resource allocation using competitive markets in computational Grid. Adv. Eng. Softw. 36(6), 425–434 (2005)

25. Mu'alem, A.W., Feitelson, D.G.: Utilization, predictability, workloads, and user runtime estimates in scheduling the IBM SP2 with backfilling. IEEE Trans. Parallel Distrib. Syst. 12(6), 529–543 (2001)

26. Nelson, M., Lim, B.-H., Hutchins, G.: Fast transparent migration for virtual machines. In: USENIX '05: Proceedings of the 2005 USENIX Annual Technical Conference, pp. 391–394 (2005)

27. Ng, C., Buonadonna, P., Chun, B.N., Snoeren, A.C., Vahdat, A.: Addressing strategic behavior in a deployed microeconomic resource allocator. In: P2PECON '05: Proceedings of the 2005 ACM SIGCOMM Workshop on Economics of Peer-to-Peer Systems, pp. 99–104. ACM Press, New York, NY, USA (2005)

28. Nisan, N.: Bidding and allocation in combinatorial auctions. In: EC '00: Proceedings of the 2nd ACM Conference on Electronic Commerce, pp. 1–12. ACM Press, New York, NY, USA (2000)

29. Oppenheimer, D., Albrecht, J., Patterson, D., Vahdat, A.: Design and implementation tradeoffs for wide-area resource discovery. In: HPDC '05: Proceedings of the 14th IEEE International Symposium on High Performance Distributed Computing. IEEE Computer Society, Washington, DC, USA (2005)

30. Peterson, L., Anderson, T., Culler, D., Roscoe, T.: A blueprint for introducing disruptive technology into the Internet. Comput. Commun. Rev. 33(1), 59–64 (2003)

31. Placek, M., Buyya, R.: Storage exchange: a global trading platform for storage services. In: EUROPAR '06: Proceedings of the 12th International European Parallel Computing Conference. Springer-Verlag (August 2006)

32. Popovici, F.I., Wilkes, J.: Profitable services in an uncertain world. In: SC '05: Proceedings of the 2005 ACM/IEEE Conference on Supercomputing, p. 36. IEEE Computer Society, Washington, DC, USA (2005)

33. Sarmiento, E.: Securing FreeBSD using jail. Syst. Admin. 10(5), 31–37 (2001)

34. Schnizler, B., Neumann, D., Veit, D., Weinhardt, C.: A multiattribute combinatorial exchange for trading Grid resources. In: RSEEM '05: Proceedings of the 12th Research Symposium on Emerging Electronic Markets (2005)

35. Sherwani, J., Ali, N., Lotia, N., Hayat, Z., Buyya, R.: Libra: a computational economy-based job scheduling system for clusters. Software Practice and Experience 34(6), 573–590 (2004)

36. Shneidman, J., Ng, C., Parkes, D.C., AuYoung, A., Snoeren, A.C., Vahdat, A., Chun, B.N.: Why markets could (but don't currently) solve resource allocation problems in systems. In: USENIX '05: Proceedings of the 10th USENIX Workshop on Hot Topics in Operating Systems (June 2005)

37. Tsafrir, D., Etsion, Y., Feitelson, D.G.: Modeling user runtime estimates. In: JSSPP '05: Proceedings of the 11th International Workshop on Job Scheduling Strategies for Parallel Processing, pp. 1–35. Cambridge, MA, USA (2005)

38. Waldspurger, C.A.: Memory resource management in VMware ESX Server. In: OSDI '02: Proceedings of the 5th Symposium on Operating Systems Design and Implementation, pp. 181–194. ACM Press, New York, NY, USA (2002)

39. Wolski, R., Plank, J.S., Brevik, J., Bryan, T.: Analyzing market-based resource allocation strategies for the computational Grid. Int. J. High Perform. Comput. Appl. 15(3), 258–281 (Fall 2001)

40. Wolski, R., Plank, J.S., Brevik, J., Bryan, T.: G-commerce: market formulations controlling resource allocation on the computational Grid. In: IPDPS '01: Proceedings of the 15th International Parallel and Distributed Processing Symposium. IEEE, San Francisco (April 2001)

41. Yeo, C.S., Buyya, R.: Pricing for utility-driven resource management and allocation in clusters. In: ADCOM '04: Proceedings of the 12th International Conference on Advanced Computing and Communications, pp. 32–41. Allied Publishers, New Delhi, India (December 2004)

42. Yeo, C.S., Buyya, R.: Service level agreement based allocation of cluster resources: handling penalty to enhance utility. In: CLUSTER '05: Proceedings of the 7th IEEE International Conference on Cluster Computing. IEEE Computer Society, Los Alamitos, CA (September 2005)

43. Yeo, C.S., Buyya, R.: Managing risk of inaccurate runtime estimates for deadline constrained job admission control in clusters. In: ICPP '06: Proceedings of the 2006 International Conference on Parallel Processing, pp. 451–458. IEEE Computer Society, Washington, DC, USA (2006)

44. Yeo, C.S., Buyya, R.: A taxonomy of market-based resource management systems for utility-driven cluster computing. Software Practice and Experience 36(13), 1381–1419 (2006)

45. Yeo, C.S., Buyya, R., de Assuncao, M.D., Yu, J., Sulistio, A., Venugopal, S., Placek, M.: Utility computing on global Grids. In: The Handbook of Computer Networks. John Wiley & Sons, New York, USA (2007)

46. Yu, J., Buyya, R.: Scheduling scientific workflow applications with deadline and budget constraints using genetic algorithms. Sci. Program. 14(3–4), 217–230 (2006)