Enabling Data Intensive Applications with Advanced Optical Technologies
Joe Mambretti, Director ([email protected]), International Center for Advanced Internet Research (www.icair.org)
Director, Metropolitan Research and Education Network (www.mren.org); Partner, StarLight/STAR TAP; PI, OMNINet (www.icair.org/omninet)
iGRID 2005, University of California, San Diego
Sept. 26-30, 2005
Introduction to iCAIR:
Accelerating Leading Edge Innovation and Enhanced Global Communications through Advanced Internet Technologies, in Partnership with the Global Community
• Creation and Early Implementation of Advanced Networking Technologies: The Next Generation Internet, All-Optical Networks, Terascale Networks
• Advanced Applications, Middleware, Large-Scale Infrastructure, NG Optical Networks and Testbeds, Public Policy Studies and Forums Related to NG Networks

iGrid 2005: The Global Lambda Integrated Facility
World of Tomorrow 2005
September 26-30, 2005, University of California, San Diego
California Institute for Telecommunications and Information Technology [Cal-(IT)2], United States
Co-Organizers: Tom DeFanti, Maxine Brown
Enabling Applications With Advanced Controllable Optical Transport
• Flexibility and Control (Not Simply "Bit Blasting")
• Providing Applications With Direct Control of Core Resources, Including at Layer 1 and Layer 2, Nationally and Internationally
• AMROEBA-EA: Distributed Computational Astrophysics Modeling
• DataWave: Ultra-High-Performance File Transfer Enabled by Dynamic Lightpaths (Parallel Optical Data Transport; sketched below)
• LightForce: High-Performance Data Multicast Enabled by Dynamic Lightpaths
• Exploring Remote and Distributed Data Using Teraflows: International 10Gb Line Speed Security
• Virtual Machine Turntable
• Multiple OptIPuter Applications
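As an illustration of the parallel transport idea behind DataWave, the following is a minimal sketch (not the actual DataWave code) of striping one file across several TCP streams, one per provisioned lightpath; the endpoints, port, and file name are hypothetical.

# Illustrative sketch: striping one large file across several parallel TCP
# streams, one per provisioned lightpath. Endpoints and file are hypothetical.
import os
import socket
from concurrent.futures import ThreadPoolExecutor

# Hypothetical: one (host, port) pair per dynamically provisioned lightpath.
LIGHTPATH_ENDPOINTS = [("10.0.1.1", 9000), ("10.0.2.1", 9000),
                       ("10.0.3.1", 9000), ("10.0.4.1", 9000)]

def send_stripe(path, offset, length, endpoint):
    """Send one contiguous byte range of the file over one lightpath."""
    with socket.create_connection(endpoint) as sock, open(path, "rb") as f:
        # Header tells the receiver where this stripe belongs in the file.
        sock.sendall(offset.to_bytes(8, "big") + length.to_bytes(8, "big"))
        f.seek(offset)
        remaining = length
        while remaining > 0:
            chunk = f.read(min(1 << 20, remaining))  # 1 MiB reads
            sock.sendall(chunk)
            remaining -= len(chunk)

def parallel_send(path):
    size = os.path.getsize(path)
    n = len(LIGHTPATH_ENDPOINTS)
    stripe = (size + n - 1) // n
    with ThreadPoolExecutor(max_workers=n) as pool:
        for i, ep in enumerate(LIGHTPATH_ENDPOINTS):
            offset = i * stripe
            length = min(stripe, size - offset)
            if length > 0:
                pool.submit(send_stripe, path, offset, length, ep)

if __name__ == "__main__":
    parallel_send("dataset.bin")  # hypothetical file name

Each stripe carries its file offset, so the receiver can reassemble the file regardless of which lightpath finishes first.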
LambdaGrid Control Plane Paradigm Shift
Traditional Provider Services: Invisible Nodes and Elements; Hierarchical, Centrally Controlled, Fairly Static; Invisible, Static Resources With Centralized Management; Limited Functionality and Flexibility
LambdaGrid Services: Distributed Devices, Dynamic Services; Visible and Accessible Resources, Integrated as Required by Applications; Unlimited Functionality and Flexibility
Ref: OptIPuter Backplane Project, UCLP
A Next Generation Architecture: Distributed Facility Enabling Many Types of Networks/Services
[Diagram: alongside the commodity Internet, a common distributed facility supports many environments: VO, International Gaming Fabric, Control Plane, TransLight/Real Orgs, Sensors, Intelligent Power Grid Control, Large Scale System Control, Global Apps, Financial Orgs, Government Agencies, RFIDNet, Bio Orgs, Labs. Each maps onto a dedicated network (SensorNet, FinancialNet, HPCNet, MediaGridNet, R&DNet, RFIDNet, BioNet, PrivNet, GovNet1, MedNet) drawing on shared underlying resources.]
ODIN Architecture
[Diagram: high-performance applications (HP-PPFS, HP-APP2, HP-APP3, HP-APP4) connect over TCP, each through a virtual service (VS), to the ODIN server, which creates and deletes lightpaths and answers status inquiries. Supporting components: access policy (AAA), process registration, process instantiation monitoring, a discovery/resource manager (including link groups), address management, a configuration database (ConfDB), and physical processing, monitoring, and adjustment. Interfaces: previously OGSA/OGSI, soon OGSA/OASIS WSRF.]
Lambda Routing:
• Topology discovery, DB of physical links
• Create new path, optimize path selection
• Traffic engineering
• Constraint-based routing (see the sketch after this list)
• O-UNI interworking and control integration
• Path selection, protection/restoration tool: GMPLS
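To make the constraint-based routing step concrete, here is a minimal sketch of wavelength-constrained path selection: Dijkstra run per wavelength, honoring wavelength continuity, with first-fit assignment. The topology, costs, and channel names are hypothetical (the node names merely echo OMNInet sites).

# A minimal sketch of constraint-based lightpath routing under a
# wavelength-continuity constraint: the same free wavelength must be
# available on every hop. Topology, costs, and wavelengths are hypothetical.
import heapq

# LINKS[(a, b)] = (cost in km, set of free wavelengths on that span)
LINKS = {
    ("StarLight", "S.Federal"):    (10.3, {"ch1", "ch2"}),
    ("S.Federal", "UIC"):          (12.4, {"ch1"}),
    ("StarLight", "Northwestern"): (35.3, {"ch1", "ch3"}),
    ("Northwestern", "UIC"):       (24.1, {"ch2", "ch3"}),
}

def neighbors(node, wl):
    """Adjacent nodes reachable from `node` on wavelength `wl`."""
    for (a, b), (cost, free) in LINKS.items():
        if wl in free:
            if a == node:
                yield b, cost
            elif b == node:
                yield a, cost

def route(src, dst):
    """First-fit over wavelengths; Dijkstra per wavelength. Returns
    (cost, path, wavelength) for the cheapest continuity-respecting path."""
    best = None
    wavelengths = set().union(*(free for _, free in LINKS.values()))
    for wl in sorted(wavelengths):
        dist, prev = {src: 0.0}, {}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, cost in neighbors(u, wl):
                nd = d + cost
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        if dst in dist and (best is None or dist[dst] < best[0]):
            path, node = [dst], dst
            while node != src:
                node = prev[node]
                path.append(node)
            best = (dist[dst], path[::-1], wl)
    return best

print(route("StarLight", "UIC"))
# -> (22.7, ['StarLight', 'S.Federal', 'UIC'], 'ch1')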
Data Plane
System Manager: Discovery, Config, Communicate, Interlink, Stop/Start Module, Resource Balance, Interface Adjustments
GMPLS Tools (with CR-LDP): LP Signaling for I-NNI; Attribute Designation (e.g., Uni/Bidirectional); LP Labeling; Link Group Designations
Control Channel Monitoring: physical fault detection, isolation, adjustment, connection validation, etc.
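A hedged sketch of the control channel monitoring idea above: a keepalive loop that declares a fault after several consecutive missed echoes, at which point isolation and restoration logic would take over. The peer address, interval, and miss limit are hypothetical.

# Illustrative only: timeout-based fault detection on a control channel.
import socket
import time

PEER = ("192.0.2.10", 5555)   # hypothetical control-channel peer
INTERVAL = 1.0                # seconds between keepalives
MISS_LIMIT = 3                # consecutive misses before declaring a fault

def monitor():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(INTERVAL)
    misses, seq = 0, 0
    while True:
        seq += 1
        sock.sendto(seq.to_bytes(4, "big"), PEER)
        try:
            data, _ = sock.recvfrom(4)
            if data == seq.to_bytes(4, "big"):
                misses = 0          # echo received: channel healthy
        except socket.timeout:
            misses += 1
            if misses >= MISS_LIMIT:
                print("control channel fault: isolate and adjust")
                break               # hand off to restoration logic
        time.sleep(INTERVAL)

if __name__ == "__main__":
    monitor()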
New: Intelligent Application Signaling*
[Architecture diagram: clients signal an IAS server; controllers (including OSM and UNI-N) sit between the client data plane servers and the optical layer.]
• Client Layer Control Plane: Communications Service Layer; Policy-Based Access Control; Client Message Receiver; Signal Transmission; Data Plane Controller; Data Plane Monitor
• Optical Layer Control Plane
• Client Layer Traffic Plane
• Optical Layer Switched Traffic (Data) Plane, Multiservice: Unicast, Bidirectional, Multicast, Burst Switching
• Interfaces: UNI, I-UNI, CI
* Also Control Signaling, et al.
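The slides do not give the signaling message format, so the following is a purely hypothetical sketch of what application-initiated lightpath signaling could look like at the client layer: create, query, and delete requests sent to a service-layer endpoint. The endpoint and all field names are invented for illustration, not the actual ODIN/IAS protocol.

# Hypothetical sketch of application-initiated lightpath signaling; the
# message format, endpoint, and fields are inventions for illustration.
import json
import socket

SERVICE = ("odin.example.net", 7000)   # hypothetical service-layer endpoint

def signal(msg):
    """Send one JSON request to the service layer and return the reply."""
    with socket.create_connection(SERVICE) as sock:
        sock.sendall(json.dumps(msg).encode() + b"\n")
        return json.loads(sock.makefile().readline())

# Request a bidirectional lightpath between two edge devices.
reply = signal({"op": "create_lightpath",
                "src": "cluster-a.example.net",
                "dst": "cluster-b.example.net",
                "bandwidth_gbps": 10,
                "bidirectional": True})
lp_id = reply["lightpath_id"]

status = signal({"op": "status", "lightpath_id": lp_id})
# ... run the data-intensive transfer over the provisioned path ...
signal({"op": "delete_lightpath", "lightpath_id": lp_id})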
Multilayer Control Planes and Optical Packet Switching
[Diagram: edge device clusters and edge device-routers attach to a mesh of optical packet routers.]
• Ubiquitous Control Plane: Provisioning, Wavelength Assignment, Wavelength Routing
• Data Plane: Optical Transport; Optical Layer Switched Lightpaths; Optical Routing
• Ubiquitous Management Plane: Access, Engineering, Restoration, Performance, Resource Use Audits
[Diagram: Grid computer clusters (each node = 1 GE; tens, hundreds, or thousands of nodes) feed N GE links into access switches (ASW) and a core switch (CSW), with a serial 10GE LAN PHY interface (e.g., 15xx nm), multiwavelength optical amplifiers, multiwavelength fiber, and DWDM links carrying multiple wavelengths per fiber. Optical monitors track wavelength precision; a power spectral density processor compares source and measured PSD. Multiple optical impairment issues arise, including accumulations. Near term: potential for 10 G electrical to the backplane; longer term: potential for driving light to the backplane via silicon and new polymers.]
OMNInet Network Configuration
[Diagram: four photonic nodes (StarLight, S. Federal, UIC, and Northwestern) interconnected over NWUEN fiber links; each node pairs an Optera 5200 10 Gb/s TSPR with a Passport 8600 switch; 10 GE trunks between nodes; 10/100/GIGE connections to hosts; 1310 nm 10 GbE WAN PHY interfaces; and a connection to Ca*Net 4 via StarLight.]
NWUEN Link Span Lengths (Fiber)
Link   km     mi
1*     35.3   22.0
2      10.3    6.4
3*     12.4    7.7
4       7.2    4.5
5      24.1   15.0
6      24.1   15.0
7*     24.9   15.5
8       6.7    4.2
9       5.3    3.3
[Diagram details: EVL/UIC and LAC/UIC OM5200 shelves over campus fiber (16 and 4), and a TECH/NU-E OM5200 shelf over campus fiber (4); initial configuration of 10 lambdas (all GigE); StarLight interconnect with other research networks; 10GE LAN PHY added Dec 03; an 8x8x8 scalable photonic switch with 10 G WDM on the trunk side and optical fiber amplifiers (OFA) on all trunks; Optera Metro 5200 OFAs along NWUEN links 1-9, with fiber in use and not in use indicated.]
DOT Sites, I-WIRE, and OMNInet
[Diagram: Chicago-area fiber plant linking UIUC/NCSA, StarLight (NU-Chicago), Argonne, UChicago, IIT, and UIC, plus Illinois Century Network sites (James R. Thompson Ctr, City Hall, State of IL Bldg) and carrier facilities at Level(3) 111 N. Canal, McLeodUSA 151/155 N. Michigan (Doral Plaza), Qwest 455 N. Cityfront, and UC Gleacher 450 N. Cityfront, with fiber counts from 2 to 18 pairs. All DOT links shown are GE. Some links are not yet provisioned because of the StarLight renovation, and one DOT cluster at iCAIR is not yet part of the testbed.]
OMNInet
• The OMNInet Testbed Is Developing New Architectural Designs for Communication Services Based on Dynamically Provisioned Lightpaths, Supported by Agile Optical Networks
• This Research Is Investigating New Architecture and Technologies for L1-L2, While Also Exploring New Complementary L3 and L4 Methods
• This Research Is Creating Fundamentally New Methods for Agile Optical Transport, Enabling Migration From Legacy Architectures, Especially Those Oriented to Centralized Management and Control
• The OMNInet Testbed Reduces Hierarchical Layers and Implements Highly Distributed Controls, e.g., Enabling Applications To Provision Lightpaths Dynamically
• Since 2001, the Testbed Has Had No SONET Components; OOO Switches at the Core Have Supported 24 Individually Addressable Lightpaths Among 4 Core Nodes
• Next: Integration of SONET-Less Optical Transport With SONET Switching
• Through Various Research Projects, the Testbed Has Been Extended to Sites Nationally and Internationally
OMNInet Key Themes and Issues
• A Key Goal Is Enhancing Service Layer Abstractions and Enabling Direct Manipulation of Core Optical Resources
• Major Improvements Over Centralized Control of Core Resources Via Highly Distributed Control
• Decentralization: Applications Can Directly Control Lightpaths
• Advanced Dynamic Lightpath Provisioning Based on Controllable, Deterministic Optical Networks
• Increased Integration Between Edge and Core Infrastructure
• Agile Solid State Components (e.g., CMOS-Based, PIC-Based)
• Availability of Cost-Effective Fiber and DWDM Equipment Provides for Highly Disruptive Price/Capability Ratios
Some Results
• Almost All Lightpaths Had Minimal to No Packet Loss
• In a Number of Tests, Large Scale Data Streams Were Transported for Many Hours With No Measured Packet Loss
• Measured Performance of Various Provisioning Processes (a measurement sketch follows this list)
• More Than 1000 Successful Lightpath Setup/Teardown Operations
• No Optical Component Failures; Several Electronic Component Failures
• Multiple Successful Demonstrations of Multiple New Service/Tech Capabilities, Including New Provider Services and New Internal Optical Transport Capabilities
• For Some Traffic, SONET/Routers Not Required (Would Have Been a Performance Barrier); For Other Traffic, a Multi-Service Approach
• Exceptional Grid Application Results: Extremely High Performance
• Created and Successfully Demonstrated Multiple Times a Basic Control/Management Plane Architectural Model and a Prototype Implementation
• Demonstrated Utility of Dynamic Lightpath Switching to High Performance Applications
• Created the "Optical Dynamic Intelligent Network" (ODIN) Service Layer Architecture
• Created a Lightpath Control Protocol
• Demonstrated the Potential of Photonic Data Services, Wavelength Switching, and L1 Security
• Demonstrated That Many Emerging Technologies Are Ready for Production (e.g., GMPLS Can Be a Basis for Production Services)
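A sketch of how provisioning performance measurements like those above might be collected: repeated setup/teardown cycles with per-operation timing. The create/delete functions are stubs standing in for the real control-plane calls, so the numbers printed are of the stubs, not OMNInet results.

# Sketch of a measurement loop for lightpath setup/teardown performance.
import statistics
import time

def create_lightpath():    # stub for the real provisioning call
    time.sleep(0.01)
    return "lp-1"

def delete_lightpath(lp):  # stub for the real teardown call
    time.sleep(0.01)

setup, teardown = [], []
for _ in range(1000):
    t0 = time.perf_counter()
    lp = create_lightpath()
    setup.append(time.perf_counter() - t0)
    t0 = time.perf_counter()
    delete_lightpath(lp)
    teardown.append(time.perf_counter() - t0)

for name, xs in (("setup", setup), ("teardown", teardown)):
    print(f"{name}: mean={statistics.mean(xs)*1e3:.1f} ms "
          f"p99={sorted(xs)[989]*1e3:.1f} ms")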
OptIPuter
• The OptIPuter Meets Precise Needs of Applications vs. Today's Environments, Where Centralized Management and Infrastructure Restrictions Compromise Applications
• The OptIPuter Enables Creation of Dynamic Distributed Virtual Computers (a conceptual sketch follows this list); It Assumes Ubiquitous Lightpaths
• Resources Include Optical Networking Components: Dynamic Lightpaths, Supported by Deterministic Next Generation Optical Networks
• For the OptIPuter, the "Network" Is a Large Scale, Distributed System Bus With a Distributed Control Architecture: a "Backplane" Based on Dynamically Provisioned Datapaths
• The OptIPuter Addresses the Needs of Extremely Large Scale Sustained Data Flows, Even Those Exhibiting Dynamic, Unpredictable Behaviors
• New Architecture, Methods, and New Technologies at All Levels, L1-L7
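A conceptual sketch of composing a distributed virtual computer in the OptIPuter sense: select resource endpoints, then provision a full mesh of lightpaths among them so the network acts as a backplane. The request function and endpoints are hypothetical stand-ins.

# Conceptual sketch of assembling a "distributed virtual computer": choose
# nodes, then provision a full mesh of lightpaths among them so the network
# behaves like a system bus. request_lightpath is a hypothetical stub.
from itertools import combinations

def request_lightpath(a, b, gbps):
    """Stand-in for the control-plane call that provisions a dynamic path."""
    print(f"provisioning {gbps} Gbps lightpath: {a} <-> {b}")
    return f"lp:{a}:{b}"

NODES = ["viz.sdsc.example", "cluster.icair.example",
         "storage.uva.example"]          # hypothetical resource endpoints

# Full mesh: every node pair gets its own dedicated datapath.
backplane = [request_lightpath(a, b, gbps=10)
             for a, b in combinations(NODES, 2)]
# ... run the application over its private backplane, then tear down ...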
AMROEBA-EA
• The AMROEBA-EA Project Was Established to Investigate the Potential for Conducting Data Intensive ENZO Simulations On a Large Scale, Distributed Infrastructure Based on Dynamic Lightpath Provisioning.
• This Project Is Investigating New Mechanisms That Allow ENZO Processes to Utilize Additional Resources, Including Those at Remote Locations World-Wide.
AMROEBA-EA and AMR-ENZO
• AMROEBA-EA: An Adaptive Mesh Refinement Optical Enzo Backplane Architecture Enabled Application
• AMR-ENZO Is Used for Computational Astrophysics Modeling
• AMR-ENZO Is Used To Create Many Types of Cosmological Structure Formation Simulations
• Originally Created by Greg Bryan Under the Supervision of Michael Norman While at NCSA
• AMR-ENZO Has Been Parallelized Using the MPI Message-Passing Library
• AMR-ENZO Can Run on Any Shared or Distributed Memory Parallel Supercomputer or Compute Cluster
• AMROEBA-EA Shows How These Types of Applications Can Utilize Distributed Computational Resources and Lightpath Switching (see the sketch below)
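AMR-ENZO itself is a large MPI code; as a stand-in, here is a minimal mpi4py sketch of the underlying pattern, a 1-D domain decomposition with halo exchange between neighboring ranks. The grid size, stencil, and iteration count are illustrative only.

# Not ENZO itself: a minimal mpi4py sketch of the pattern AMR-ENZO relies on,
# distributing a 1-D grid across ranks with halo exchange between neighbors.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 1_000_000                      # total cells, illustrative
local_n = N // size
u = np.random.rand(local_n + 2)    # +2 ghost cells for neighbor halos

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(100):
    # Exchange halo cells with neighboring ranks (no-op at domain edges).
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # Simple 3-point relaxation standing in for the real hydro solver.
    u[1:-1] = 0.5 * u[1:-1] + 0.25 * (u[:-2] + u[2:])

if rank == 0:
    print("done after 100 steps on", size, "ranks")

Run with, e.g., mpirun -n 4 python sketch.py; each rank owns a slab of the grid and only the one-cell halos cross the network each step.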
Visualization
Source Code: Mike Norman, UCSD
Overall Networking Plan
[Diagram: dedicated lightpaths run from San Diego (iGRID, UCSD) over PW/CENIC to Seattle (Pacific Wave), across NLR to Chicago (StarLight), and onward to NetherLight and the University of Amsterdam; Route B provides dedicated lightpaths via NetherLight. In total: 4 x 1 Gbps paths plus one control channel.]
AMROEBA Network Topology
[Diagram: UvA VanGogh Grid clusters (SURFnet/University of Amsterdam) and iCAIR DOT Grid clusters (StarLight) connect through L2 switches and an OME to the iGRID conference floor, with L3 (GbE) and control links feeding the iGRID demonstration and its visualization systems.]
Summary Optical Services: Baseline + 5 Years (2005-2010)
[Roadmap; each track progresses left to right across 2005-2010:]
• New Services Abstractions → Enhanced Direct Addressing → WS Multidomain Announcements → Multiple New Services → Enhanced Reach, Multiple Sites → National, Global
• Application Optical LP Integration → Optical Grid Net and Instrument Services (API-Op) → Extensions to Additional Edge Devices → Extensions to Optical Backplanes → Optical Edge Services, Multiple Sites → National, Global
• Access to Highly Distributed Control Plane → Multi-Domain Distributed Control Access → Multi-Site Access to Services, National, Global → Persistent Inter-Domain Signaling, National, Global → Extension to Additional Net Elements → Persistence: Common Facilities
• Deterministic Paths (App as Service) → Close Integration w/ App Signaling → Increased Attribute Parameters → Increased Adjustment Parameters → Performance Metrics and Methods → Enhanced Recovery/Restoration
• Dynamic Lightpath Allocation → Multi-Domain Allocation of DLP → Wavelength Conversion → Extensions of DLP Peering → E2E DLP, Large Scale Distribution → Virtual Optical Backplanes
• Dedicated Switched Lightpaths → Enhanced via WDM Mux/Demux → Enhanced Granularity → Increased Allocation Capacity: Sites → Increased Allocation Capacity: US → Increased Allocation Capacity: Global
• SONET-Less Optical Transport → Integration With SONET → Optical SubChanneling → E2E Transport, New Digital Frame Services → New E2E Framed Services
• Multi-Service Layer Integration → Integration with Optical Services → Multi-Domain Integrated Services → Distributed Management → Monitoring Techniques → Analysis Techniques
• Wavelength Routing → Selectable Wavelength Routing → Multi-Domain Wavelength Routing → Multi-Layer Integration → Multi-Services Integration → Enhanced Recovery/Restoration
Summary Optical Technologies: Baseline + 5 Years (2005-2010)
[Roadmap; each track progresses left to right across 2005-2010:]
• O-APIs: O-API Signaling → App Specific APIs → Variable APIs → Multidomain Signaling → E2E Signaling → Distributed Control Systems, Multi-Domain
• OADMs: Integration with Standard OADM → Integration with ROADMs → Enhanced Granularity → Enhanced Addressability → Enhanced Edge Integration
• OOO Core Switches: At Selected Core Sites → At Selected Core, Edge Sites → + Experimental Solid State OSWs → Solid State OSW Deployment → Solid State (PIC) at Core, Edge
• O-UNIs: O-UNI v2 → O-UNI v3 → Enhanced O-UNI Signaling → At Selected Core, Edge Sites → Deployment at Key Sites, Global
• ODIN: Service Abstraction, GMPLS Integration → Additional Signaling Integration → ODIN 2.0 → Increased Transparency, Layer Elimination → Increased Integration w/ ID/Object Discovery (ODIN 3) → Prototype Architecture for App Specific Service Abstraction → Enhanced Architecture, ODIN v3.n
• SONET-Less Transport → New Types of Digital Framing → Metro Core Framing Architecture → LH Framing Architecture → Integration With PICs → Integration with Backplanes
• New ID, Object, and Discovery Mechanisms → Integration of New ID, Object, Discovery w/ New Architecture → Integration With Multiple Integrated Services → Integration w/ New Management Systems → Extensions to Various TE Functions → Persistent at Core, Edge Facilities
• DWDM: DWDM → CWDM → Integration with Edge Optics → Integration with Backplane Optics → Additional MUX/DEMUX → Increased Stream Granularity
• Switching: 2D MEMS → 3D LP Switches → Experimental Optical Packet Switches → Prototype Deployed OPSW → NanoPhotonic Devices at Edge and Core Sites