Environmental-aware virtual data center network
Kim Khoa Nguyen a,*, Mohamed Cheriet a, Mathieu Lemay b, Victor Reijs c, Andrew Mackarel c, Alin Pastrama d

a Department of Automated Manufacturing Engineering, École de Technologie Supérieure, University of Quebec, Montreal, Quebec, Canada
b Inocybe Technologies Inc., Gatineau, Quebec, Canada
c HEAnet Ltd., Dublin, Ireland
d NORDUnet, Kastrup, Denmark

* Corresponding author. E-mail address: [email protected] (K.K. Nguyen).
Article info

Article history: Received 15 September 2011; Received in revised form 21 February 2012; Accepted 20 March 2012; Available online xxxx

Keywords: GreenStar Network; Mantychore FP7; Green ICT; Virtual data center; Neutral carbon network
Abstract
Cloud computing services have recently become a ubiquitous service delivery model, covering a wide range of applications from personal file sharing to enterprise data warehousing. Building green data center networks providing cloud computing services is an emerging trend in the Information and Communication Technology (ICT) industry, because of global warming and the potential GHG emissions resulting from cloud services. As one of the first worldwide initiatives provisioning ICT services entirely based on renewable energy such as solar, wind and hydroelectricity across Canada and around the world, the GreenStar Network (GSN) was developed to dynamically transport user services to be processed in data centers built in proximity to green energy sources, reducing the Greenhouse Gas (GHG) emissions of ICT equipment. Under the current approach, which focuses mainly on reducing energy consumption at the micro level through energy efficiency improvements, overall energy consumption will eventually increase due to the growing demand from new services and users, resulting in an increase in GHG emissions. Based on the cooperation between Mantychore FP7 and the GSN, our approach is, therefore, much broader and more appropriate because it focuses on GHG emission reductions at the macro level. This article presents some outcomes of our implementation of such a network model, which spans multiple green nodes in Canada, Europe and the USA. The network provides cloud computing services based on dynamic provisioning of network slices through the relocation of virtual data centers.

© 2012 Elsevier B.V. All rights reserved.
1. Introduction
Nowadays, reducing greenhouse gas (GHG) emissions is becoming one of the most challenging research topics in Information and Communication Technology (ICT), because of the alarming growth of indirect GHG emissions resulting from the overwhelming use of ICT electrical devices [1]. The current approach when dealing with the ICT GHG problem is improving energy efficiency, aimed at reducing energy consumption at the micro level. Research projects following this direction have focused on micro-processor design, computer design, power-on-demand architectures and virtual machine consolidation techniques. However, a micro-level energy efficiency approach will likely lead to an overall increase in energy consumption due to the Khazzoom–Brookes postulate (also known as the Jevons paradox) [2], which states that "energy efficiency improvements that, on the broadest considerations, are economically justified at the micro level, lead to higher levels of energy consumption at the macro level". Therefore, we believe that reducing GHG emissions at the macro level is a more appropriate solution. Large ICT companies, like Microsoft, which consumes up to 27 MW of energy at any given time [1], have built their data centers near green power sources. Unfortunately, many computing centers are not so close to
green energy sources. Thus, a green energy distributed network is an emerging technology, given that losses incurred in energy transmission over power utility infrastructures are much higher than those caused by data transmission, which makes relocating a data center near a renewable energy source a more efficient solution than trying to bring the energy to an existing location.
The GreenStar Network (GSN) project [3,4] is the first worldwide initiative aimed at providing ICT services based entirely on renewable energy sources such as solar, wind and hydroelectricity across Canada and around the world. The network can transport user service applications to be processed in data centers built in proximity to green energy sources, so that the GHG emissions of ICT equipment are reduced to a minimum. Whilst energy efficiency techniques are still encouraged at low-end client equipment (e.g., hand-held devices, home PCs), the heaviest computing services should be dedicated to data centers powered completely by green energy. This is enabled thanks to an abundant reserve of natural green energy resources in Canada, Europe and the USA. The carbon credit saving that we refer to in this article concerns the emissions due to the operation of the network; the GHG emissions during the production phase of the equipment used in the network and in the server farms are not considered, since no special equipment is deployed in the GSN.
In order to move virtualized data centers towards network nodes powered by green energy sources distributed in such a multi-domain network, particularly between the European and North American domains, the GSN is based on a flexible routing platform provided by the Mantychore FP7 project [5], which collaborates with the GSN project to enhance the carbon footprint exchange standard for ICT services. This collaboration enables research on the feasibility of powering e-Infrastructures in multiple domains worldwide with renewable energy sources. Management and technical policies will be developed to leverage virtualization, which helps to migrate virtual infrastructure resources from one site to another based on power availability. This will facilitate the use of renewable energy within the GSN, providing an Infrastructure as a Service (IaaS) management tool. By integrating connectivity to parts of the European National Research and Education Network (NREN) infrastructures with the GSN, this work develops the competencies needed to understand how a set of green nodes (each powered by a different renewable energy source) could be integrated into an everyday network. Energy availability is considered before moving virtual services, and the services are moved without suffering connectivity interruptions. The influence of physical location on such relocations, including weather prediction and the estimation of solar power generation, is also addressed.
The main objective of the GSN/Mantychore liaison is to create a pilot study and a testbed environment from which to derive best practices and guidelines to follow when building low carbon networks. Core nodes are linked by an underlying high speed optical network having up to 1000 Gbit/s bandwidth capacity provided by CANARIE. Note that optical networks have only a modest increase in power consumption, especially with the new 100 Gbit/s technology, in comparison to electronic equipment such as routers and aggregators [6]. The migration of virtual data centers over network nodes is indeed a result of the convergence of server and network virtualization as virtual infrastructure management. The GSN as a network architecture is built with multiple layers, resulting in a large number of resources to be managed. Virtualized management has therefore been proposed for service delivery regardless of the physical location of the infrastructure, which is determined by resource providers. This allows complex underlying services to remain hidden inside the infrastructure provider. Resources are allocated according to user requirements; hence high utilization and optimization levels can be achieved. During the service, users monitor and control the resources as if they were the owners, allowing them to run their applications in a virtual infrastructure powered by green energy sources.
The remainder of this article is organized as follows. In Section 2, we present the connection plan of the GSN/Mantychore joint project as well as its green nodes. Section 3 gives the physical architecture of data centers powered by renewable energy. Next, the virtualization solution of green data centers is provided in Section 4. Section 5 is dedicated to the cloud-based management solution that we implemented in the GSN. In Section 6, we formulate the optimization problem of renewable energy utilization in the network. Section 7 reports experimental and simulation results obtained when a virtual data center service is hosted in the network. Finally, we draw conclusions and present future work.
2. Provisioning of ICT services with renewable energy
Rising energy costs and working in an austerity-based environment with dynamically changing business requirements have increased the focus of the NREN community on controlling some characteristics of connectivity services, so that users can change some of the service characteristics without having to renegotiate with the service provider. In the European NREN community, connectivity services are provisioned on a manual basis, with some effort now focusing on automating service setup and operation. Automation and monitoring of energy usage and emissions will be one of the new provisioning constraints employed in emerging networks.
The Mantychore FP7 project has evolved from the previous research projects MANTICORE and MANTICORE II [5,7]. The initial MANTICORE project goal was to implement a proof of concept based on the idea that routers and an IP network can be set up as a service (IP Network as a Service, IPNaaS, managing a Layer 3 network). MANTICORE II continued in the steps of its predecessor to implement stable and robust software while running trials on a range of network equipment.
The Mantychore FP7 project allows the NRENs to provide a complete, flexible network service that offers research communities the ability to create an IP network under their control, where they can configure the following (an illustrative sketch of such a request is given after this list):
(a) Layer 1, optical links. Users will be able to get access control over optical devices such as optical switches, to configure important properties of their cards and
ports. Mantychore integrates the Argia framework [8], which provides complete control of optical resources.
(b) Layer 2, Ethernet and MPLS. Users will be able to get control over Ethernet and MPLS (Layer 2.5) switches to configure different services. In this aspect, Mantychore will integrate the Ether project [9] and its capabilities for the management of Ethernet and MPLS resources.
(c) Layer 3. The Mantychore FP7 suite includes a set of features for: (i) configuration and creation of virtual networks, (ii) configuration of physical interfaces, (iii) support of routing protocols, both internal (RIP, OSPF) and external (BGP), (iv) support of QoS and firewall services, (v) creation, modification and deletion of resources (interfaces, routers), both physical and logical, and (vi) support of IPv6, allowing the configuration of IPv6 on interfaces, routing protocols and networks.
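As a rough illustration of the kind of Layer 3 request described in item (c), the following minimal Python sketch models a user-defined logical IP network. All class and field names here are hypothetical; they do not reflect the actual Mantychore FP7 interfaces.

```python
# Illustrative sketch only: a toy data model for a user-defined logical IP network.
# Class and field names are hypothetical and are not the Mantychore FP7 API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class InterfaceConfig:
    name: str            # e.g. a physical or logical interface identifier
    ipv4: str = ""       # optional IPv4 address/prefix
    ipv6: str = ""       # optional IPv6 address/prefix (feature vi)

@dataclass
class LogicalRouter:
    router_id: str
    interfaces: List[InterfaceConfig] = field(default_factory=list)
    igp: str = "OSPF"    # internal routing protocol (RIP or OSPF)
    egp: str = "BGP"     # external routing protocol

@dataclass
class LogicalIPNetwork:
    """A user-controlled Layer 3 network built from physical/logical resources."""
    name: str
    routers: List[LogicalRouter] = field(default_factory=list)

    def add_router(self, router: LogicalRouter) -> None:
        self.routers.append(router)

# Example: a two-router IPv6-enabled research network.
net = LogicalIPNetwork(name="green-testbed")
net.add_router(LogicalRouter("r1", [InterfaceConfig("eth0", ipv6="2001:db8::1/64")]))
net.add_router(LogicalRouter("r2", [InterfaceConfig("eth0", ipv6="2001:db8::2/64")]))
print(f"{net.name}: {len(net.routers)} logical routers")
```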
Fig. 1 shows the connection plan of the GSN. The Canadian section of the GSN has the largest deployment, with six GSN nodes powered by sun, wind and hydroelectricity. It is connected to the European green nodes in Ireland (HEAnet), Iceland (NORDUnet), Spain (i2CAT) and the Netherlands (SURFnet), and to some other nodes in other parts of the world, such as in China (WiCo), Egypt (Smart Village) and the USA (ESNet).
One of the key objectives of the liaison between the Mantychore FP7 and GSN projects is to enable renewable energy provisioning for NRENs. Building competency in using renewable energy resources is vital for any NREN with such an abundance of natural power generation resources at its backdoor, and this has been targeted as a potential major industry for the future. HEAnet in Ireland [10], where the share of electricity generated from renewable energy sources in 2009 was 14.4%, has connected two GSN nodes via the GEANT Plus service and its own NREN network: a GSN solar powered node in the South East of Ireland and a
wind powered, grid-supplied location in the North East of the country. NORDUnet [11], which links the Scandinavian countries having the highest proportion of renewable energy sources in Europe, houses a GSN node at a data center in Reykjavík (Iceland) and also contributes to the GSN/Mantychore controller interface development. In Spain [12], where 12.5% of the energy comes from renewable energy sources (mostly solar and wind), i2CAT is leading the Mantychore development as well as actively defining the GSN/Mantychore interface, and will set up a solar powered node in Lleida (Spain). The connected nodes and their sources of energy are shown in Table 1.

Fig. 1. The GreenStar Network.
Recently, industrial projects have been built for generating renewable energy to offset data center power consumption around the world [1,13]. However, they rather focus on a single data center. The GSN is characterized by the cooperation of multiple green data centers to reduce the overall GHG emission in a network. To the best of our knowledge, it is the first ICT carbon reduction research initiative of this type in the world. Prior to our work, theoretical research has been done with respect to the optimization of power consumption (including "brown" energy) in a network of data centers based on the redistribution of workload across nodes [14–16]. The GSN goes one step further with a real deployment of a renewable energy powered network. In addition, our approach is to dynamically relocate virtual data centers through a cloud management middleware.
3. Architecture of a green powered data center
Fig. 2 illustrates the physical architecture of three types of green nodes: one powered by hydroelectricity, one by solar energy and one by wind. The solar panels are grouped in bundles of 9 or 10 panels, and each panel generates a power of 220–230 W. The wind turbine system is a 15 kW generator.
Table 1
List of connected GSN nodes.

Node           Location                   Type of energy
ETS            Montreal, QC, Canada       Hydroelectricity
UQAM           Montreal, QC, Canada       Hydroelectricity
CRC            Ottawa, ON, Canada         Solar
Cybera         Calgary, AB, Canada        Solar
BastionHost    Halifax, NS, Canada        Wind
RackForce      Kelowna, BC, Canada        Hydroelectricity
CalIT2         San Diego, CA, USA         Solar/DC power
NORDUnet       Reykjavik, Iceland         Geothermal
SURFnet        Utrecht, The Netherlands   Wind
HEAnet-DKIT    Dundalk, Ireland           Wind
HEAnet-EPA     Wexford, Ireland           Solar
i2CAT          Barcelona, Spain           Solar
WiCo           Shanghai, China            Solar
Fig. 2. Architecture of green nodes (hydro, wind and solar types).
After being accumulated in a battery bank, the electrical energy is treated by an inverter/charger in order to produce an appropriate output current for computing and networking devices. User applications run on multiple Dell PowerEdge R710 systems, hosted in a rack mount structure in an outdoor climate-controlled enclosure. The air conditioning and heating elements are powered by green energy at solar and wind nodes; they are connected to the regular power grid at hydro nodes. As the servers are installed in the outdoor enclosure, the carbon assessment of a node can be highly accurate. The PDUs (Power Distribution Units), provided with power monitoring features, measure electrical current and voltage. Within each node, servers are linked by a local network, which is then connected to the core network through GE transceivers. Data flows are transferred among GSN nodes over dedicated circuits (such as lightpaths or P2P links), tunnels over the Internet, or logical IP networks.
The Montreal GSN node plays the role of a manager (hub node) that opportunistically sets up the required connectivity at Layer 1 and Layer 2 using dynamic services, then pushes Virtual Machines (VMs) or software virtual routers from the hub to a sun or wind node (spoke node) when power is available. VMs will be pulled back to the hub node when the power dwindles. In such a case, the spoke node may switch over to grid energy for running other services if required; GSN services, however, are powered entirely by green energy. The VMs are used to run user applications, particularly heavy-computing services. Based on this testbed network, experiments and research are performed targeting cloud management algorithms and optimization of the intermittently-available renewable energy sources.
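A minimal sketch of this hub-and-spoke policy is given below. The classes, power threshold and migration helpers are hypothetical and stand in for the GSN middleware; the sketch only mirrors the push/pull behaviour described above.

```python
# Illustrative hub/spoke sketch. The Spoke/Hub classes and the threshold are
# hypothetical; they only mirror the push/pull policy described in the text.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Spoke:
    name: str
    green_power_w: float          # instantaneous renewable power at the spoke
    min_power_w: float = 500.0    # assumed threshold to keep VMs running
    vms: List[str] = field(default_factory=list)

    def power_available(self) -> bool:
        return self.green_power_w >= self.min_power_w

@dataclass
class Hub:
    vms: List[str] = field(default_factory=list)

    def balance(self, spokes: List[Spoke]) -> None:
        for spoke in spokes:
            if spoke.power_available() and self.vms:
                vm = self.vms.pop()              # push work toward green power
                spoke.vms.append(vm)
                print(f"push {vm} -> {spoke.name}")
            elif not spoke.power_available() and spoke.vms:
                vm = spoke.vms.pop()             # pull work back when power dwindles
                self.vms.append(vm)
                print(f"pull {vm} <- {spoke.name}")

hub = Hub(vms=["vm1", "vm2"])
spokes = [Spoke("solar-ottawa", green_power_w=1800.0),
          Spoke("wind-halifax", green_power_w=200.0)]
hub.balance(spokes)
```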
4. Virtualizing a data center
In the GSN, dynamic management of green data centers is achieved through virtualization. All data center components are virtualized and placed into a cloud, which is then controlled by a cloud manager. From the users' point of view, a virtual data center can be seen as an information factory that runs 24/7 to deliver a sustained service (i.e., a stream of data). It includes virtual machines (VMs) linked by a virtual network. The primary benefit of virtualization technologies across different IT resources is to boost overall effectiveness while improving application service delivery (performance, availability, responsiveness, security) to sustain business growth in an environmentally friendly manner.
Fig. 3 gives a schematic overview of the virtualization process of a simple data center which is powered by renewable energy sources (wind and solar) as well as the power grid, including:
• Power generators, which produce electrical energy from renewable sources such as wind, solar or geothermal.
• A charge controller, which regulates the generated electricity going to the battery bank, preventing overcharging and overvoltage.
• A battery bank, which stores energy and delivers power.
• An inverter, which converts DC from the battery bank into an AC supply used by the electrical devices. When the voltage of the battery bank is not sufficient, the inverter will buy current from the power grid. In the CRC (Ottawa) data center, the battery voltage is maintained between 24 Vdc and 29 Vdc; otherwise, electricity is bought from the power grid (it is AC and therefore is not converted by the inverter). If the generated power is too high and is not completely consumed by the data center equipment, the surplus is sold to the power grid (a sketch of this decision logic is given after this list).
• A Power Distribution Unit (PDU), which distributes electricity from the inverter to servers, switches and routers. There are infeeds and outlets in the PDU for measuring and controlling the incoming and outgoing currents. Currently, the GSN software supports Raritan, Server Tech, Avocent, APC and Eaton PDUs.
• Climate control equipment, which allows the temperature and humidity of the data center's interior to be accurately controlled.
• Servers and switches, which are used to build the data center's network.
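The following minimal Python sketch illustrates the inverter/charge-controller policy described in the list above. The 24–29 Vdc band comes from the text; the function name and the power figures are assumptions made for illustration only.

```python
# Illustrative inverter policy sketch based on the description above.
# The voltage band (24-29 Vdc) comes from the text; everything else is assumed.
def inverter_action(battery_voltage: float, generated_w: float, consumed_w: float) -> str:
    """Decide where power flows for one control interval."""
    LOW_VDC, HIGH_VDC = 24.0, 29.0
    if battery_voltage < LOW_VDC:
        # Battery too low: draw AC directly from the grid (no DC->AC conversion needed).
        return "buy from grid"
    if generated_w > consumed_w and battery_voltage >= HIGH_VDC:
        # Battery full and surplus generation: export the excess.
        return "sell surplus to grid"
    # Normal operation: the inverter supplies the load from the battery bank.
    return "supply load from battery"

for v, gen, load in [(23.5, 300, 900), (28.0, 1200, 900), (29.5, 2000, 900)]:
    print(v, gen, load, "->", inverter_action(v, gen, load))
```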
Each device in the data center is virtualized by a software tool, and is then represented as a resource (e.g., PDU resource, network resource, etc.). These resource instances communicate with devices, parse commands and decide to perform appropriate actions. Communication with the devices can be done through the Telnet, SSH or SNMP protocols. Some devices come with the manufacturer's software; however, such software is usually designed to interact with human users (e.g., through a web interface). The advantage of the virtualization approach is that a resource can be used by other services, or resources, which enables auto-management processes.
Fig. 4 shows an example of the resources developed for PDUs and computing servers. Similar resources are developed for power generators, climate controls, switches and routers. Server virtualization is able to control the physical server and the hypervisors used on the server; the GSN supports the KVM and XEN hypervisors. The Facility Resource (Fig. 3) is not attached to a physical device. It is responsible for linking the Power Source, PDU and Climate resources in order to create measurement data which is used to make decisions for data center management. Each power consumer or generator provides a metering function which allows the user to view the power consumed or produced by the device. The total power consumption of a data center is reported by the Facility Resource.
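A minimal sketch of this resource model is shown below, assuming hypothetical Python class names; it only illustrates how a Facility Resource could link Power Source, PDU and Climate readings, and is not the IaaS Framework's actual Java/OSGi implementation.

```python
# Illustrative resource sketch. Class names are hypothetical; the real GSN
# middleware is implemented on J2EE/OSGi, not in Python.
class Resource:
    """Base class: every virtualized power device exposes a metering function."""
    def metered_power_w(self) -> float:
        raise NotImplementedError

class PDUResource(Resource):
    def __init__(self, outlet_readings_w):
        self.outlet_readings_w = list(outlet_readings_w)  # per-outlet power draw
    def metered_power_w(self) -> float:
        return sum(self.outlet_readings_w)

class PowerSourceResource(Resource):
    def __init__(self, generated_w: float):
        self.generated_w = generated_w
    def metered_power_w(self) -> float:
        return self.generated_w

class ClimateResource:
    def __init__(self, temperature_c: float, humidity_pct: float):
        self.temperature_c = temperature_c
        self.humidity_pct = humidity_pct

class FacilityResource:
    """Not tied to a device: links power source, PDU and climate readings."""
    def __init__(self, source: PowerSourceResource, pdu: PDUResource, climate: ClimateResource):
        self.source, self.pdu, self.climate = source, pdu, climate
    def total_consumption_w(self) -> float:
        return self.pdu.metered_power_w()
    def report(self) -> dict:
        return {"generated_w": self.source.metered_power_w(),
                "consumed_w": self.total_consumption_w(),
                "humidity_pct": self.climate.humidity_pct}

facility = FacilityResource(PowerSourceResource(1500.0),
                            PDUResource([220.0, 310.0, 180.0]),
                            ClimateResource(21.0, 45.0))
print(facility.report())
```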
In the current implementation, network devices are virtualized at three layers. At the physical layer, Argia [8] treats optical cross-connects as software objects and can provision and reconfigure optical lightpaths within a single domain or across multiple, independently managed domains. At Layer 2, Ether [9] is used, which is similar in concept to Argia but with a focus on Ethernet and MPLS networks; Ether allows users to acquire ports on an enterprise switch and manage VLAN or MPLS configurations on their own. At the network layer, a virtualized control plane created by the Mantychore FP7 project [5] is
deployed, which was specifically designed for IP networks with the ability to define and configure physical and/or logical IP networks.
In order to build a virtual data center, all Compute, Network, Climate, PDU and Power Source resources are hosted by a cloud service, which allows the resources to be always active and available to control devices. In many data centers, e.g. [13], energy management and server management are separate functions. This imposes challenges in data center management, particularly when power sources are intermittent, because the server load cannot be adjusted to adapt to the power provisioning capacity, and the data center strictly depends on power providers. As power and servers are managed in a unified manner, our proposed model allows a completely autonomous management of the data center.
Fig. 3. Green data center and virtualization.

Fig. 4. Example of resources in the cloud and a subset of their commands.

Our power management solution is designed with four key characteristics:
• Auto-configuration: the system is able to detect a new device connecting to a data center. It will ask the administrator to provide the appropriate configuration for the new device.
• Easy monitoring: convenient access to real-time information on energy consumption, helping the administrator to make power regulation decisions.
• Remote control: allows online access to devices, enabling these appliances to be controlled remotely.
• Optimized planning: helps reduce GHG emissions by balancing workloads according to the capacity of the servers and the available energy sources in the entire network.
An example of how the power management function operates is as follows. A user wants to connect a new device (e.g., a server) to the data center. When this new server is plugged into a PDU and turned on, the PDU resource detects it by scanning the outlets and finding the load current of the outlet. If there is no further information about the device, the system considers it a basic resource and assigns only a basic on/off command to the device. The user will be asked to provide access information for the device (e.g., username, password, protocol) and the type of device. When the required information is provided and a new Compute resource has been identified, it will be exposed to the cloud. If the device is non-intelligent (e.g., a light), its consumption level is accounted for in the total consumption of the data center; however, no particular action will be taken on this device (e.g., no workload shared with other devices through a migration when the power dwindles).
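A minimal sketch of this detection workflow is given below. The helper names and the 5 W detection threshold are hypothetical; the sketch mirrors the steps described above rather than the actual GSN implementation.

```python
# Illustrative auto-configuration sketch following the workflow in the text.
# All function names and the 5 W threshold are hypothetical.
from typing import Dict, Optional

def scan_outlets(pdu_readings_w: Dict[int, float], known_outlets: set) -> Optional[int]:
    """Return the first outlet that draws power but is not yet registered."""
    for outlet, watts in pdu_readings_w.items():
        if watts > 5.0 and outlet not in known_outlets:
            return outlet
    return None

def register_device(outlet: int, access_info: Optional[dict]) -> dict:
    """Without access info the device stays a basic on/off resource."""
    if access_info is None:
        return {"outlet": outlet, "type": "basic", "commands": ["on", "off"]}
    # With credentials and a device type, expose a full Compute resource to the cloud.
    return {"outlet": outlet, "type": "compute",
            "commands": ["on", "off", "migrate", "monitor"], "access": access_info}

readings = {1: 160.0, 2: 0.0, 3: 75.0}
known = {1}
new_outlet = scan_outlets(readings, known)
if new_outlet is not None:
    # First pass: no information yet, treated as a basic resource.
    print(register_device(new_outlet, None))
    # After the administrator supplies credentials, it becomes a Compute resource.
    print(register_device(new_outlet, {"user": "admin", "protocol": "ssh"}))
```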
Management of a network of virtual data centers is provided in the next section.
5. Management of a virtual data center network
From an operational point of view, the GreenStar Network is controlled by a computing and energy management middleware, as shown in Fig. 5, which includes two distinct layers:
• Resource layer: contains the drivers of each physical device. Each resource controls a device and offers services to a higher layer. Basic functions for any resource include start, stop, monitor, power control, report device status and run application. Compute resources control physical servers and VMs. Power Source resources manage energy generators (e.g., wind turbines or solar panels). PDU resources control the Power Distribution Units (PDUs). Climate resources monitor the weather conditions and the humidity of the data centers. A Facility resource links the Power Source, PDU and Climate resources of a data center with its Compute resources. It determines the power consumed by each server and sends notifications to the managers when the power or environmental conditions change.
• Management layer: includes a Controller and a set of managers. Each manager is responsible for its individual resources. For example, the Cloud Manager manages Compute resources, the Facility Manager manages Facility resources, and the Network Manager handles data center connectivity. The Controller is the brain of the network and is responsible for determining the optimized location of each VM. It computes the action plans to be executed on each set of resources, and then orders the managers to perform them. Based on the information provided by the Controller, these managers can execute relocation and migration tasks. The relationship between the Controller and the associated managers can be regarded as the Controller/Forwarder connection in an IP router. The Controller keeps an overall view of the entire network; it computes the best location for each VM and updates a Resource Location Table (RLT). A "snapshot" of the RLT is provided to the managers. When there is a request for the creation of a new VM, a manager will consult the table to determine the best location for the new VM. If there is a change in the network, e.g., when the power source of a data center dwindles, the Controller re-computes the best locations for the VMs and updates the managers with the new RLT (a sketch of this interaction follows the list).
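A minimal sketch of this Controller/manager interaction is given below. The class names are hypothetical, and the placement rule is reduced to "the data center with the most available green power" as a stand-in for the full optimization of Section 6.

```python
# Illustrative Controller / Resource Location Table (RLT) sketch.
# Names are hypothetical; the placement rule is deliberately simplified.
from typing import Dict

class Controller:
    def __init__(self, green_power_by_dc: Dict[str, float]):
        self.green_power_by_dc = dict(green_power_by_dc)
        self.rlt: Dict[str, str] = {}            # VM name -> data center

    def compute_rlt(self, vms) -> Dict[str, str]:
        """Recompute best locations (here: the data center with the most green power)."""
        best_dc = max(self.green_power_by_dc, key=self.green_power_by_dc.get)
        self.rlt = {vm: best_dc for vm in vms}
        return dict(self.rlt)                    # a "snapshot" handed to the managers

class CloudManager:
    def __init__(self, rlt_snapshot: Dict[str, str]):
        self.rlt_snapshot = rlt_snapshot
    def place_new_vm(self, vm: str) -> str:
        # A manager consults its RLT snapshot rather than recomputing placement itself.
        return self.rlt_snapshot.get(vm, next(iter(self.rlt_snapshot.values())))

controller = Controller({"ottawa-solar": 1200.0, "montreal-hydro": 5000.0})
snapshot = controller.compute_rlt(["vm-a", "vm-b"])
manager = CloudManager(snapshot)
print(manager.place_new_vm("vm-c"))   # unknown VM: falls back to a snapshot location

# When a power source dwindles, the Controller recomputes and redistributes the RLT.
controller.green_power_by_dc["montreal-hydro"] = 0.0
print(controller.compute_rlt(["vm-a", "vm-b", "vm-c"]))
```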
The middleware is implemented on the J2EE/OSGi platform, using components of the IaaS Framework [17]. It is designed in a modular manner so that the GSN can easily be extended to cover different layers and technologies. Through Web interfaces, users may determine GHG emission boundaries based on information about VM power and energy sources, and then take actions to reduce GHG emissions. The project is therefore ISO 14064 compliant. Indeed, cloud management is not a new topic; however, the IaaS Framework is developed for the GSN because the project requires an open platform converging server and network virtualization. Whilst many cloud management solutions on the market focus particularly on computing resources, IaaS Framework components can be used to build network virtualization tools [6], allowing flexible setup of data flows among data centers. Indeed, the virtual network management provided by cloud middleware such as OpenNebula [18] and OpenStack [19] is limited to the IP layer, while the IaaS Framework deals with network virtualization on all three network layers. The ability to incorporate third-party power control components is also an advantage of the IaaS Framework.
In order to compute the best location for each VM, each host in the network is associated with a number of parameters. Key parameters include the current power level and the available operating time, which can be determined respectively through a power monitor and weather measurements, in addition to statistical information. The Controller implements an optimization algorithm to maximize the utilization of data centers within the period when renewable energy is available. In other words, we try to push as many VMs as possible to the hosts powered by renewable energy. A number of constraints are also taken into account, such as:
– Host capacity limitation: each VM requires a certain amount of memory and a number of CPU cycles. Thus, a physical server can host only a limited number of VMs.
– Network limitation: each service requires a certain amount of bandwidth, so the number of VMs moved to a data center is limited by the available network bandwidth of the data center.
– Power source limitation: each data center is powered by a number of energy sources, which must meet the power requirements of the physical servers and accessories. When the power dwindles, the load on the physical servers should be reduced by moving out the VMs.
The GreenStar Network is able to automatically make scheduling decisions on dynamically migrating and consolidating VMs among physical servers within data centers to meet the workload requirements while maximizing renewable energy utilization, especially for performance-sensitive applications. In the design of the GSN architecture, several key issues are addressed, including when to trigger VM migrations and how to select alternative physical machines to achieve an optimized VM placement.
6. Environmental-aware migration of virtual data centers
In the GSN, a virtual data center migration decision can be made in the following situations:
– If the power dwindles below a threshold that is not sufficient to keep the data center operational. This event is detected and reported by the Facility resource.
– If the climate conditions are not suitable for the data center (i.e., the temperature is too low or too high, or the humidity level is too high). This event is detected and reported by the Climate resource. The Climate resource is also able to import forecast information represented in XML format, which is used to plan migrations for a given period ahead (a parsing sketch is given after this list).
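A minimal sketch of such a forecast import is given below, using Python's standard xml.etree module. The XML layout, the 500 W solar threshold and the function name are assumptions of this sketch; only the 80% humidity limit is taken from the experiments in Section 7.

```python
# Illustrative forecast import. The XML layout below is an assumption made for
# this sketch; the GSN's actual forecast format is not specified in the article.
import xml.etree.ElementTree as ET

FORECAST_XML = """
<forecast site="ottawa-solar">
  <hour t="2010-12-01T10:00" solar_w="900" humidity_pct="62"/>
  <hour t="2010-12-01T11:00" solar_w="150" humidity_pct="85"/>
</forecast>
"""

HUMIDITY_LIMIT = 80.0   # threshold used in the experiments (Section 7)
MIN_SOLAR_W = 500.0     # assumed minimum power to keep the node on solar

def plan_migrations(xml_text: str):
    """Return the forecast hours at which a relocation should be planned."""
    root = ET.fromstring(xml_text)
    migrate_at = []
    for hour in root.findall("hour"):
        solar = float(hour.get("solar_w"))
        humidity = float(hour.get("humidity_pct"))
        if solar < MIN_SOLAR_W or humidity > HUMIDITY_LIMIT:
            migrate_at.append(hour.get("t"))
    return migrate_at

print(plan_migrations(FORECAST_XML))   # ['2010-12-01T11:00']
```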
A migration involves four steps: (i) setting up a new environment (i.e., in the target data center) for hosting the VMs with the required configurations, (ii) configuring the network connection, (iii) moving the VMs and their running state information through this high speed connection to the new locations, and (iv) turning off the computing resources at the original node. Note that the migration is implemented in a live manner based on support from hypervisors [20], meaning that a VM can be moved from one physical server to another while continuously running, without any noticeable effects from the point of view of the end users. During this procedure, the memory of the virtual machine (VM) is iteratively copied to the destination without stopping its execution. A delay of around 60–300 ms is required to perform the final synchronization before the virtual machine begins executing at its final destination, providing the illusion of a seamless migration.
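The four steps can be read as a simple orchestration sequence, sketched below with hypothetical placeholder functions; they do not correspond to a real hypervisor or middleware API.

```python
# Illustrative orchestration of the four migration steps described above.
# Each step function is a hypothetical placeholder, not a real hypervisor call.
def setup_target_environment(target_dc: str, vm: str) -> None:
    print(f"(i)   prepare {vm} configuration at {target_dc}")

def configure_network(source_dc: str, target_dc: str) -> None:
    print(f"(ii)  set up connection {source_dc} -> {target_dc}")

def live_move_vm(vm: str, target_dc: str) -> None:
    # Live migration: memory pages are copied iteratively while the VM keeps running;
    # only the final synchronization pauses it for roughly 60-300 ms.
    print(f"(iii) copy {vm} memory/state to {target_dc}")

def release_source(source_dc: str, vm: str) -> None:
    print(f"(iv)  power off {vm}'s resources at {source_dc}")

def migrate(vm: str, source_dc: str, target_dc: str) -> None:
    setup_target_environment(target_dc, vm)
    configure_network(source_dc, target_dc)
    live_move_vm(vm, target_dc)
    release_source(source_dc, vm)

migrate("vm-example-01", "ottawa-solar", "montreal-hydro")
```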
In our experiments with the online interactive application GeoChronos [21], each VM migration requires 32 Mbps of bandwidth in order to keep the service live during the migration; thus a 10 Gbit/s link between two data centers can transport more than 300 VMs in parallel. Given that each VM occupies one processor and that each server has up to 16 processors, 20 servers can be moved in parallel.
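A quick arithmetic check of these figures, reproducing the numbers quoted above:

```python
# Arithmetic check of the parallel-migration figures quoted in the text.
link_gbps = 10.0
per_vm_mbps = 32.0
vms_in_parallel = (link_gbps * 1000.0) / per_vm_mbps   # 312.5 -> "more than 300 VMs"
processors_per_server = 16                              # one VM per processor
servers_in_parallel = vms_in_parallel / processors_per_server  # ~19.5, i.e. about 20 servers
print(vms_in_parallel, servers_in_parallel)
```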
Generally, only one data center needs to be migrated at a given moment in time. However, the Controller might be unable to migrate all the VMs of a data center because the servers in the network are over capacity. In such a case, if there is no further requirement (e.g., user preferences on the VMs to be moved), the Controller will try to migrate as many
VMs as possible, starting from the smallest VM (in terms of memory capacity).

Fig. 5. The GreenStar Network middleware architecture.
6.1. Optimization problem formulation
The key challenge of a mixed power management solution is to determine how long the current electric power resource can power the data center. We define an operating hour (opHour) metric as the time during which the data center can be properly operational with the current power source. opHour is calculated based on the energy stored in the battery bank, the system voltage and the electric current. It can take two values:

– opHour under current load (opHour_currentload): the time the data center can be operational assuming no additional load will be put on the servers.
– opHour under maximum load (opHour_maximumload): the time the data center can be operational assuming all servers run at their full capacity (memory and CPU).

The consumption of each server is determined through the PDU, which reports the voltage and current of the server. The value of opHour is calculated as follows:
opHour_{currentload} = \frac{E_{max} \cdot BatteryChargeState}{V_{out} \cdot I_{out}}    (1)

opHour_{maximumload} = \frac{E_{max} \cdot BatteryChargeState}{V_{out} \cdot I_{max}}    (2)
where I_{max} is the electrical current recorded by the PDU outlet when the server is running at its maximal capacity (reported by server benchmarking tools [22]). The maximal capacity of a server is reached when a heavy-load application is run.
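A small numeric sketch of Eqs. (1) and (2) as reconstructed above is shown below; interpreting E_max as the total battery capacity and the charge state as a fraction are assumptions of this sketch, as are all numeric values.

```python
# Illustrative opHour computation following Eqs. (1)-(2) as reconstructed above.
# Treating E_max as battery capacity (Wh) and the charge state as a fraction
# is an assumption of this sketch.
def op_hours(e_max_wh: float, charge_state: float, v_out: float, i_amps: float) -> float:
    """Hours of operation left at the given output voltage and current draw."""
    return (e_max_wh * charge_state) / (v_out * i_amps)

E_MAX_WH = 10_000.0      # assumed battery bank capacity
CHARGE = 0.6             # 60% charged
V_OUT = 26.0             # within the 24-29 Vdc band mentioned in Section 4
I_CURRENT = 12.0         # measured current under the present load
I_MAX = 30.0             # current at full server load (from benchmarking)

print("opHour (current load):", round(op_hours(E_MAX_WH, CHARGE, V_OUT, I_CURRENT), 1))
print("opHour (maximum load):", round(op_hours(E_MAX_WH, CHARGE, V_OUT, I_MAX), 1))
```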
The optimization problem of data center migration is formulated as follows. Given N VMs hosted by a data center, the opHour value (op) of the data center is provided by its Facility Resource using Eqs. (1) and (2). When the opHour is lower than a threshold T defined by the administrator, the data center needs to be relocated. Each VM V_i with capacity C_i = {C_ik}, i.e., memory capacity and number of CPUs, will be moved to an alternate location. In the GSN, the threshold is T = 1 for all data centers.
There are M physical data centers powered by green energy sources that will be used to host the virtual data center. Each green data center D_j is currently powered by an energy source S_j having a green factor E_j. In the GSN, the green factor of wind and solar sources is 100, and that of hydro and geothermal sources is 95. The opHour of data center D_j is op_j, given its current energy source. The data center D_j has L_j Compute resources {R_jl}, each of which has an available capacity (CPU and memory) C_jl = {C_jlk} and is hosting H_jl VMs {V_jlh}, where each VM V_jlh has capacity C_jlh = {C_jlhk}. When a VM V_i is moved to D_j, the new opHour of the data center is predicted as follows:
op_j = op_j \cdot \left(1 - \sum_{k} \frac{\alpha_k \cdot c_{ik}}{\sum_{l=1}^{L_j} c_{jlk}}\right)    (3)
where \alpha_k is a specific constant determining the power consumption for CPU and memory access, as described in [23]. The following matrix of binary variables represents the virtual data center relocation result:
x_{ijl} = \begin{cases} 1 & \text{if } V_i \text{ is hosted by } R_{jl} \text{ in } D_j \\ 0 & \text{otherwise} \end{cases}    (4)
The objective function is to maximize the utilization of green energy; in other words, we try to keep the VMs running on green data centers as long as possible. So, the problem is to maximize:
641problem is to maximize:642
PN
i
PM
j
PLj
l
xijl � opj � 1�P
k
ak �cik
PLj
l¼1cjlk
!
� Ej ð5Þ644644
Subject to:
\sum_{i=1}^{N} \sum_{j=1}^{M} \sum_{l=1}^{L_j} x_{ijl} = N    (6)

x_{ijl} \cdot C_{ik} + \sum_{h=1}^{H_{jl}} C_{jlhk} \le C_{jlk} \quad \forall i, j, l, k    (7)

op_j \cdot \left(1 - \sum_{k} \frac{\alpha_k \cdot c_{ik}}{\sum_{l=1}^{L_j} c_{jlk}}\right) > T \quad \forall i, j    (8)
We assume that the optical switches of the underlying CANARIE network are powered by a sustainable green power source (i.e., hydroelectricity). Also, note that the power consumption of the optical lightpaths linking the data centers is very small compared to the consumption of the data centers. Therefore, migrations do not increase the consumption of "brown" energy in the network, and the power consumption of the underlying network is hence not included in the objective function.
In reality, as the GSN is supported by a very high speed underlying network and the current nodes are tiny data centers with few servers, the time required for migrating an entire node is on the order of minutes, whilst a full battery bank is able to power a node for a period of about ten hours. Additional user-specific constraints, for example that the VMs of a single virtual data center must be placed in a single physical data center, are not taken into account in the current network model.
This formulation is referred to as the original mathematical optimization problem. The solution of the problem will show whether the virtual data center can be relocated and, if so, the optimized relocation scheme. If no solution is found, in other words, if there are not sufficient resources in the network to host the VMs, a notification will be sent to the system administrator, who may then decide to scale up the existing data centers. Since the objective function is nonlinear and there are nonlinear constraints, the optimization model is a nonlinear programming problem with binary variables. The problem can be considered a bin packing problem with variable bin sizes and prices, which is proven NP-hard. In our implementation, we apply a modification of the Best Fit Decreasing (BFD) algorithm [24] to solve the problem. Our heuristic is as follows: as the goal is to maximize green power utilization, data centers are first sorted in decreasing order of their available power. VMs are sorted in increasing order of their size and are then placed into the data centers in the list, starting from the top. Given a relatively small number of
resources in the network (i.e., on the order of a few hundred), the solution can be obtained in a few minutes. Without the proposed ordering heuristic, the gap between a best-fit solution and the optimum is about 25% for 13 network nodes [25].
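A minimal sketch of this modified Best Fit Decreasing heuristic is given below, with a single scalar capacity per VM and per data center standing in for the full CPU/memory, opHour and green-factor terms of Eqs. (5)–(8); the class and function names are hypothetical.

```python
# Illustrative sketch of the ordering heuristic described above. Capacities are
# reduced to a single scalar per VM/data center; the real problem also tracks
# CPU/memory dimensions, opHour and the green factor E_j.
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class DataCenter:
    name: str
    available_power: float   # proxy for op_j * E_j
    free_capacity: float     # remaining VM capacity (scalar stand-in)

def place_vms(vms: Dict[str, float], dcs: List[DataCenter]) -> Dict[str, Optional[str]]:
    # Data centers in decreasing order of available (green) power.
    dcs = sorted(dcs, key=lambda d: d.available_power, reverse=True)
    placement: Dict[str, Optional[str]] = {}
    # VMs in increasing order of size, so small VMs fill the greenest sites first.
    for vm, size in sorted(vms.items(), key=lambda kv: kv[1]):
        placement[vm] = None
        for dc in dcs:                      # try the greenest data centers first
            if dc.free_capacity >= size:
                dc.free_capacity -= size
                placement[vm] = dc.name
                break
    return placement                         # None means "notify the administrator"

vms = {"vm-big": 8.0, "vm-small": 1.0, "vm-mid": 4.0}
dcs = [DataCenter("ottawa-solar", 1200.0, 6.0),
       DataCenter("montreal-hydro", 900.0, 10.0)]
print(place_vms(vms, dcs))
```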
7. Experimental results
We evaluate the time during which a virtual data center in the GSN/Mantychore network can be viable without the power grid, assuming that the processing power required by the data center is equivalent to 48 processors. Weather conditions are also taken into account, with a predefined threshold of 80% for the humidity level. We also assume that the Power Usage Effectiveness (PUE) of the data center is 1.5, meaning that the total power used by the data center is 1.5 times the power effectively used by the computing resources. In reality, a lower PUE can be obtained with more advanced data center architectures.
Fig. 6A and B show, respectively, the electricity generated by a solar PV system installed at the Ottawa data center during a period of three months and the humidity level of the data center. State "ON" in Fig. 6C indicates that the data center is powered by solar energy. When the electric current is too small or the solar PV charger is off due to bad weather, the data center is powered from a battery bank for a limited time; it then has to switch to the power grid (state "OFF") if virtualization and migration techniques are not present. The data center service fails (state "FAIL") when the battery bank is empty or the humidity level is too high (>80%).

As shown in Fig. 6C, the data center has to switch frequently from the solar PV to the power grid when the autonomous power regulation feature is not implemented, as the weather gets worse during the last months of the year. From mid-December to the end of January, the electricity generated by the solar PV is negligible, so the data center has to be connected to the power grid permanently. The data center fails about 15% of the time, due to either the humidity or the power level.
The same workload and power pattern is imported into the Controller in order to validate the performance of our system when the data center is virtualized, as shown in Fig. 6D. As the virtual data center is relocated to alternate locations when the solar energy dwindles or the humidity level rises above the threshold, the time it is in the "ON" state is much longer than in Fig. 6C. The zero failing time was achieved because the network has never been running at its full capacity; in other words, the other data centers always have enough space (memory and CPU) to receive the VMs of the Ottawa center. Although the time when the virtual data center is powered entirely by solar energy increases significantly compared to Fig. 6C (e.g., over 80%), from the second half of January the data center still has to be connected to the power grid (state "OFF"), because renewable energy is not sufficient to power all the data centers in the network in mid-winter.
Extensive experiments showed that a virtual data center having 10 VMs with a total of 40 GB of memory takes about five minutes to be relocated across nodes of the Canadian section of the GSN, from the western to the eastern coast. This time almost doubles when VMs are moved to a node located in the southern USA or in Europe, due to a larger number of transit switches. The performance of migration is extremely low when VMs go to China, due to the lack of lightpath tunnels; this suggests that optical connections are required for this kind of network. It is worth noting that the node in China is not permanently connected to the GSN and is therefore not involved in our optimization problem. As GSN nodes are distributed across different time zones, we observed that VMs in solar nodes are moved from East (i.e., Europe) to West (i.e., North America) following the sunlight. When solar power is not sufficient for running the VMs, the wind, hydroelectricity and geothermal nodes take over.
Note that the GSN is still an on-going research project, and the experimental results presented in this paper were obtained over a short period of time. Some remarks need to be added:
– VM migration can be costly, especially for bandwidth constrained or high latency networks. In reality, the GSN is deployed on top of a high speed optical network provided by CANARIE, having up to 100 Gbps capacity; it is worth noting that CANARIE is one of the fastest networks in the world [26]. As a research project, the cost of the communication network is not yet taken into account, although it could be high for intercontinental connections. Indeed, our algorithm attempts to minimize the number of migrations in the network by keeping VMs running on data centers as long as possible; thus, the network resources used for virtual data center relocations are also minimized. However, along with the overwhelming utilization of WDM networks, especially with next generation high speed networks such as Internet2 [27], GENI [28] or GÉANT [12], we believe that the portion of networking in the overall cost model will be reduced, making the green data center network realistic.
– The size of the GSN nodes is very small compared to real corporate data centers. Thus, the time for relocating a virtual data center could be underestimated in our research. However, some recent research suggests that small data centers would be more profitable than large-scale ones [29,30]. The GSN model would therefore be appropriate for this kind of data center. In addition to the issue of available renewable power, mega-scale data centers would result in a very long migration time when an entire data center is relocated. Also, the resources used for migrations increase with the size of the data center. As migrations do not generate user revenue, a network of large scale data centers would hence become unrealistic.
– All GSN nodes are located in northern countries. Thus, it is hard to keep the network operational in mid-winter periods with solar energy. Our observations showed that the hub could easily be overloaded. A possible solution is to partition the network into multiple sections, each having a hub powered by sustainable energy sources.
Each hub will be responsible for maintaining the service of one section. For example, the GSN may have two hubs, one in North America and the other in Europe.

Fig. 6. Power, humidity level and data center state from October 12, 2010 to January 26, 2011.
8. Conclusion
In this paper, we have presented a prototype of a Future Internet powered only by green energy sources. As a result of the cooperation between European and North American researchers, the GreenStar Network is a promising model to deal with GHG reporting and carbon tax issues for large ICT organizations. Based on the Mantychore FP7 project, a number of techniques have been developed in order to provision renewable energy for ICT services worldwide. Virtualization techniques are shown to be the most appropriate solution to manage such a network and to migrate virtual data centers following the availability of green energy sources, such as solar and wind.

Our future work includes research on the quality of services hosted by the GSN and on scalable resource management.
Acknowledgments

The authors thank all partners for their contributions to the GSN and Mantychore FP7 projects. The GSN project is financially supported by CANARIE Inc.
References
[1] The Climate Group, SMART2020: Enabling the low carbon economy in the information age, Report on behalf of the Global eSustainability Initiative (GeSI), 2008.
[2] H.D. Saunders, The Khazzoom–Brookes postulate and neoclassical growth, Energy Journal 13 (4) (1992) 130–148.
[3] The GreenStar Network Project. <http://www.greenstarnetwork.com/>.
[4] C. Despins, F. Labeau, T. Le Ngoc, R. Labelle, M. Cheriet, C. Thibeault, F. Gagnon, A. Leon-Garcia, O. Cherkaoui, B. St. Arnaud, J. Mcneill, Y. Lemieux, M. Lemay, Leveraging green communications for carbon emission reductions: techniques, testbeds and emerging carbon footprint standards, IEEE Communications Magazine 49 (8) (2011) 101–109.
[5] E. Grasa, X. Hesselbach, S. Figuerola, V. Reijs, D. Wilson, J.M. Uzé, L. Fischer, T. de Miguel, The MANTICORE project: providing users with a logical IP network service, in: TERENA Networking Conference, 5/2008.
[6] S. Figuerola, M. Lemay, V. Reijs, M. Savoie, B. St. Arnaud, Converged optical network infrastructures in support of future internet and grid services using IaaS to reduce GHG emissions, Journal of Lightwave Technology 27 (12) (2009) 1941–1946.
[7] E. Grasa, S. Figuerola, X. Barrera, et al., MANTICORE II: IP network as a service pilots at HEAnet, NORDUnet and RedIRIS, in: TERENA Networking Conference, 6/2010.
[8] E. Grasa, S. Figuerola, A. Forns, G. Junyent, J. Mambretti, Extending the Argia software with a dynamic optical multicast service to support high performance digital media, Optical Switching and Networking 6 (2) (2009) 120–128.
[9] S. Figuerola, M. Lemay, Infrastructure services for optical networks [Invited], Journal of Optical Communications and Networking 1 (2) (2009) A247–A257.
[10] HEAnet website. <http://www.heanet.ie/>.
[11] NORDUnet website. <http://www.nordu.net>.
[12] J. Moth, G. Kramer, GN3 Study of Environmental Impact Inventory of Greenhouse Gas Emissions and Removals – NORDUnet, GÉANT Technical Report, 9/2010.
[13] P. Thibodeau, Wind power data center project planned in urban area, 2008. <http://www.computerworld.com/>.
[14] L. Zhenhua, L. Minghong, A. Wierman, S.H. Low, L.L.H. Andrew, Greening geographic load balancing, in: Proceedings of ACM SIGMETRICS, 2011, pp. 233–244.
[15] S.K. Garg, C.S. Yeo, A. Anandasivam, R. Buyya, Environment-conscious scheduling of HPC applications on distributed cloud-oriented data centers, Journal of Parallel and Distributed Computing (JPDC) 71 (6) (2011) 732–749.
[16] A. Qureshi, R. Weber, H. Balakrishnan, J. Guttag, B. Maggs, Cutting the electric bill for internet-scale systems, ACM Computer Communication Review 39 (4) (2009) 123–134.
[17] M. Lemay, K.-K. Nguyen, B. St. Arnaud, M. Cheriet, Convergence of cloud computing and network virtualization: towards a neutral carbon network, IEEE Internet Computing Magazine (2011). ISSN: 1089-7801.
[18] D. Milojicic, I.M. Llorente, R.S. Ruben, OpenNebula: a cloud management tool, IEEE Internet Computing Magazine 15 (2) (2011) 11–14.
[19] OpenStack.org, OpenStack Open Source Cloud Computing Software. <http://openstack.org>.
[20] M. Nelson, B.H. Lim, G. Hutchins, Fast transparent migration for virtual machines, in: Proceedings of the USENIX Annual Technical Conference, 2005, pp. 391–394.
[21] C. Kiddle, GeoChronos: a platform for earth observation scientists, OpenGridForum 28 (3) (2010).
[22] V. Makhija, B. Herndon, P. Smith, L. Roderick, E. Zamost, J. Anderson, VMmark: a scalable benchmark for virtualized systems, Technical Report TR 2006-002, VMware, 2006.
[23] A. Kansal, F. Zhao, J. Liu, N. Kothari, Virtual machine power metering and provisioning, in: Proceedings of the 1st ACM Symposium on Cloud Computing, 2010, pp. 39–50.
[24] V.V. Vazirani, Approximation Algorithms, Springer, 2003.
[25] D. György, The tight bound of first fit decreasing bin-packing algorithm is FFD(I) ≤ (11/9)OPT(I) + 6/9, Combinatorics, Algorithms, Probabilistic and Experimental Methodologies 4614 (2007) 1–11.
[26] The British Broadcasting Corporation (BBC), Scientists Break World Record for Data Transfer Speeds, BBC News Technology, 2011.
[27] R. Summerhill, The new Internet2 network, in: 6th GLIF Meeting, 2006.
[28] J.S. Turner, A proposed architecture for the GENI backbone platform, in: Proceedings of the ACM/IEEE Symposium ANCS, 2006.
[29] V. Valancius, N. Laoutaris, L. Massoulié, C. Diot, P. Rodriguez, Greening the internet with nano data centers, in: Proceedings of CoNEXT '09, 2009, pp. 37–48.
[30] K. Church, A. Greenberg, J. Hamilton, On delivering embarrassingly distributed cloud services, in: Proceedings of HotNets-VII, 2008.
Kim Khoa Nguyen is a Research Fellow at the Automation Engineering Department of the École de Technologie Supérieure (University of Quebec). He is a key architect of the GreenStar Network project and a member of the Synchromedia Consortium. Since 2008 he has been with the Optimization of Communication Networks Research Laboratory at Concordia University. His research includes green ICT, cloud computing, smart grid, router architectures and wireless networks. He holds a PhD in Electrical and Computer Engineering from Concordia University.
Mohamed Cheriet is a Full Professor in the Automation Engineering Department at the École de Technologie Supérieure (University of Quebec). He is co-founder of the Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), and founder and director of the Synchromedia Consortium. Dr. Cheriet serves on the editorial boards of Pattern Recognition (PR), the International Journal of Document Analysis and Recognition (IJDAR), and the International Journal of Pattern Recognition and Artificial Intelligence (IJPRAI). He is a member of IAPR, a senior member of the IEEE and the Founder and Premier Chair of the IEEE Montreal Chapter of Computational Intelligent Systems (CIS).
Mathieu Lemay holds a degree in electrical engineering from the École de Technologie Supérieure (2005) and a master's in optical networks (2007). He is currently a Ph.D. candidate at the Synchromedia Consortium of ETS. He is the Founder, President, and CEO of Inocybe Technologies Inc. He is currently involved in Green IT and is leading the IaaS Framework Open Source initiative. His main research themes are virtualization, network segmentation, service-oriented architectures and distributed systems.
Victor Reijs studied at the University of Twente in the Netherlands, then worked for KPN Telecom Research and SURFnet. He was involved in CLNS/TUBA (an earlier alternative for IPv6) and gained experience with X.25 and ATM in national and international environments. His last activity at SURFnet was the tender for SURFnet5 (a step towards optical networking). He is currently managing the Network Development team of HEAnet and is actively involved in international activities such as GN3, FEDERICA and Mantychore (IP Networks as a Service), as well as (optical) networking, point-to-point links, virtualization and monitoring in general.
Andrew Mackarel has been a Project Manager in HEAnet's NetDev Group for the past 5 years. His work concerns the planning and deployment of large networking projects and the introduction of new services such as Bandwidth on Demand. Previously Andrew worked for Bell Labs and Lucent Technologies as Test Manager in their Optical Networking Group. He has worked as a Principal Test Engineer with Lucent, Ascend Communications and Stratus Computers. He has over 30 years of work experience in the Telecommunications and Computing Industries.
Alin Pastrama has been working for NORDUnet as a NOC Engineer since 2010 and has been involved in a number of research and deployment projects, both internal and external, such as Mantychore FP7 and GN3 Bandwidth-on-Demand. He has an M.Sc. degree in Communication Systems from the Royal Institute of Technology (KTH) in Sweden.