GEC17: Using ExoGENI
Ilya Baldin (ibaldin@renci.org), RENCI
(PowerPoint presentation transcript)
ExoGENI Overview
14 GPO-funded racks built by IBM
◦ Partnership between RENCI, Duke and IBM
◦ IBM x3650 M4 servers (X-series 2U)
  1x146GB 10K SAS hard drive + 1x500GB secondary drive
  48GB RAM, 1333MHz
  Dual-socket 8-core CPU
  Dual 1Gbps adapter (management network)
  10G dual-port Chelsio adapter (dataplane)
◦ BNT 8264 10G/40G OpenFlow switch
◦ DS3512 6TB sliverable storage
  iSCSI interface for head-node image storage as well as experimenter slivering
◦ Cisco (UCS-B) and Dell configurations also exist
Each rack is a small networked cloud
◦ OpenStack-based with NEuca extensions
◦ xCAT for bare-metal node provisioning
http://wiki.exogeni.net
Testbed
ExoGENI is a collection of off-the-shelf institutional clouds
◦ With a GENI federation on top
◦ xCAT – an IBM product
◦ OpenStack – a Red Hat product
Operators decide how much capacity to delegate to GENI and how much to retain for themselves
Familiar industry-standard interfaces (EC2)
GENI interface
◦ Mostly does what GENI experimenters expect
ExoGENI
ExoGENI at a glance
Rack Software Stack
Deployment structure
An ExoGENI cloud “rack site”
ExoGENI racks are separate aggregates but also act as a single aggregate
◦ Transparent stitching of resources from multiple racks
ExoGENI is designed to bridge distributed experimentation, computational sciences and Big Data
◦ Already running HPC workflows linked to OSG and national supercomputers
◦ Newly introduced support for storage slivering
◦ Strong performance isolation is one of the key goals
ExoGENI unique features
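The "single aggregate" behavior above rests on finding a sequence of inter-rack circuits that joins the racks an experimenter names. A minimal sketch of that path search, with a made-up link map (the rack names and adjacencies here are illustrative, not the actual ExoGENI site topology):

```python
from collections import deque

# Hypothetical inter-rack dataplane links; names and adjacencies are
# illustrative placeholders, not the real ExoGENI deployment.
links = {
    "rci": ["bbn", "uh"],
    "bbn": ["rci", "fiu"],
    "uh": ["rci", "fiu"],
    "fiu": ["bbn", "uh"],
}

def stitch_path(src, dst):
    """Breadth-first search for a chain of racks/circuits joining two sites."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no transit path exists

print(stitch_path("rci", "fiu"))  # shortest chain, e.g. ['rci', 'bbn', 'fiu']
```

The real ORCA control framework does considerably more (VLAN tag negotiation, bandwidth checks along the chosen path), but the transparent-stitching idea is exactly this: the experimenter asks for end-to-end connectivity and the system computes the chain.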
GENI tools: Flack, GENI Portal, omni
◦ Give access to common GENI capabilities
◦ Also mostly compatible with:
  ExoGENI native stitching
  ExoGENI automated resource binding
ExoGENI-specific tools: Flukes
◦ Accepts GENI credentials
◦ Access to ExoGENI-specific features:
  Elastic Cluster slices
  Storage provisioning
  Stitching to campus infrastructure
ExoGENI experimenter tools
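Whichever tool is used, the request ultimately takes the form of a GENI request RSpec. A minimal sketch of building one programmatically; the node names are placeholders and `xo.small` is assumed as the instance size:

```python
import xml.etree.ElementTree as ET

# Sketch of a two-node GENI v3 request RSpec of the kind omni or Flukes
# submits. Node/link names are illustrative; "xo.small" is an assumed
# ExoGENI instance size.
NS = "http://www.geni.net/resources/rspec/3"
ET.register_namespace("", NS)

rspec = ET.Element(f"{{{NS}}}rspec", type="request")
for name in ("node0", "node1"):
    node = ET.SubElement(rspec, f"{{{NS}}}node", client_id=name)
    ET.SubElement(node, f"{{{NS}}}sliver_type", name="xo.small")
link = ET.SubElement(rspec, f"{{{NS}}}link", client_id="link0")
for name in ("node0", "node1"):
    ET.SubElement(link, f"{{{NS}}}interface_ref", client_id=f"{name}:if0")

print(ET.tostring(rspec, encoding="unicode"))
```

Flukes speaks ORCA's native NDL request format rather than RSpec internally, which is how it exposes the ExoGENI-specific features listed above.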
ExoGENI activities
Compute nodes
◦ Up to 100 VMs in each full rack
◦ A few (2) bare-metal nodes
◦ BYOI (Bring Your Own Image)
True Layer 2 slice topologies can be created
◦ Within individual racks
◦ Between racks
◦ With automatic and user-specified resource binding and slice topology embedding
◦ Stitching across I2, ESnet, NLR, regional providers; dynamic wherever possible
OpenFlow experimentation
◦ Within racks
◦ Between racks
◦ Including OpenFlow overlays in NLR (and I2)
◦ On-ramp to campus OpenFlow network (if available)
Experimenters are allowed and encouraged to use their own virtual appliance images
Since Dec 2012
◦ 2500+ slices
Experimentation
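The "automatic resource binding" mentioned above can be pictured as a packing problem: place each VM group in a rack that still has room. A toy first-fit sketch, where the 100-VM figure comes from the slide but the rack names and free counts are made up:

```python
# Illustrative first-fit binding of VM requests to racks. The 100-VM
# per-rack cap is from the slide; rack names and current free slots
# are invented for the example.
free = {"rack-a": 100, "rack-b": 80}

def bind(requests):
    """Map each (name, vm_count) request to the first rack with room."""
    placement = {}
    for name, count in requests:
        for rack, slots in free.items():
            if slots >= count:
                free[rack] = slots - count
                placement[name] = rack
                break
        else:
            raise RuntimeError(f"no rack can host {name}")
    return placement

placement = bind([("web", 60), ("db", 50)])
print(placement)  # {'web': 'rack-a', 'db': 'rack-b'}
```

User-specified binding simply pins a request to a named rack instead of letting the first-fit (in reality, ORCA's embedding logic) choose.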
[Figure: ExoGENI at a glance — virtual network exchange; virtual colo: campus net to circuit fabric; multi-homed cloud hosts with network control; topology embedding and stitching; computed embedding; workflows, services, etc.]
Slice half-way around the world
◦ ExoGENI rack in Sydney, Australia
◦ Multiple VLAN tags on a pinned path from Sydney to LA
◦ Internet2/OSCARS ORCA-provisioned dynamic circuit
  LA, Chicago
◦ NSI statically pinned segment with multiple VLAN tags
  Chicago, NY, Amsterdam
◦ Planning to add dynamic NSI interface
◦ ExoGENI rack in Amsterdam
◦ ~14,000 miles
◦ 120ms delay
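The quoted numbers are consistent with simple physics: light in fiber travels at roughly two-thirds of c, so ~14,000 miles of path alone accounts for most of the 120 ms. A quick check:

```python
# Back-of-the-envelope check of the quoted figures (~14,000 miles,
# ~120 ms). Light in fiber propagates at roughly 2/3 c, ~200,000 km/s.
MILES_TO_KM = 1.609344
FIBER_SPEED_KM_S = 200_000  # approximate group velocity in glass

path_km = 14_000 * MILES_TO_KM
one_way_ms = path_km / FIBER_SPEED_KM_S * 1000
print(f"{path_km:.0f} km -> ~{one_way_ms:.0f} ms one-way propagation")
```

That yields roughly 113 ms of pure propagation delay; switching and equipment delays plausibly make up the rest of the observed 120 ms.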
Strong isolation is the goal
◦ Compute instances are KVM-based and get a dedicated number of cores
◦ VLANs are the basis of connectivity
  VLANs can be best-effort or bandwidth-provisioned (within and between racks)
ExoGENI slice isolation
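A bandwidth-provisioned VLAN implies an admission check on each shared link: the sum of reservations must not exceed capacity, with best-effort VLANs left out of the sum. A minimal sketch, using the 10G dataplane figure from the rack specs (the existing reservations are invented):

```python
# Sketch of the admission check a bandwidth-provisioned VLAN implies.
# The 10 Gbps link capacity is from the rack's dataplane adapter spec;
# the existing reservations below are illustrative.
LINK_CAPACITY_GBPS = 10.0

def admit(reservations, request_gbps):
    """Return True if a new provisioned VLAN fits on the shared link."""
    return sum(reservations) + request_gbps <= LINK_CAPACITY_GBPS

existing = [4.0, 3.5]          # already-provisioned VLANs on this link
print(admit(existing, 2.0))    # True: 9.5 Gbps total fits
print(admit(existing, 3.0))    # False: 10.5 Gbps exceeds the link
```

Best-effort VLANs share whatever headroom remains, which is why only provisioned reservations enter the check.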
Scientific Workflows
Workflow Management Systems
◦ Pegasus, custom scripts, etc.
Lack of tools to integrate with dynamic infrastructures
◦ Orchestrate the infrastructure in response to the application
◦ Integrate data movement with workflows for optimized performance
◦ Manage the application in response to the infrastructure
Scenarios
◦ Computational with varying demands
◦ Data-driven with large static data set(s)
◦ Data-driven with large amounts of input/output data
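For the "computational with varying demands" scenario, orchestrating the infrastructure in response to the application amounts to a control loop that resizes the compute slice as the workflow's queue depth changes. A toy sketch; the thresholds and per-node job count are made-up tuning knobs, not ExoGENI parameters:

```python
# Illustrative scaling rule for a workflow-driven slice: pick a node
# count proportional to queue depth, clamped to the rack's limits.
# JOBS_PER_NODE and max_nodes are invented tuning knobs.
JOBS_PER_NODE = 10

def target_nodes(queued_jobs, max_nodes=20):
    """Nodes needed for the current queue, at least 1, at most max_nodes."""
    want = -(-queued_jobs // JOBS_PER_NODE)  # ceiling division
    return max(1, min(want, max_nodes))

for queued in (5, 95, 400):
    print(queued, "->", target_nodes(queued))
```

In practice the "grow" and "shrink" actions would be slice-modify calls against the aggregate, which is exactly the dynamic-infrastructure integration the slide says current workflow tools lack.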
Dynamic Workflow Steps (Computational)
Workflow slices
◦ 462,969 Condor jobs since using the on-ramp to engage-submit3 (OSG)
Dynamic Workflow Steps (On-Ramp)
Dynamic Workflow Steps (Data-driven)
Hardware-in-the-loop slices
◦ Hardware-in-the-Loop Facility Using RTDS & PMUs (FREEDM Center, NCSU)
[Figure: GENI-WAMS hardware-in-the-loop testbed. Experimental PMU data from an RTDS is streamed into the RENCI Data Center; PoPs at UNC Chapel Hill, Duke University, and NC State connect Clusters 1–3 with intra-cluster virtualization for distributed execution of time-critical synchrophasor applications over a fiber-optic PMU data network using IEEE C37.118. Plot panels: PMU voltage magnitudes (pu) during the AEP Richview event; fast oscillations (pu) at Bus 1, Bus 2 and the midpoint; angle difference (deg) from TVA measurements.]
The new GENI-WAMS testbed studies:
◦ Latency & processing delays
◦ Packet loss
◦ Network jitter
◦ Cyber-security: man-in-the-middle attacks
Aranya Chakrabortty, Aaron Hurz (NCSU); Yufeng Xin (RENCI/UNC)
http://www.exogeni.net
◦ ExoBlog: http://www.exogeni.net/category/exoblog/
http://wiki.exogeni.net
Thank you!