The OptIPuter Project - Montana


Page 1: The OptIPuter Project - Montana

The OptIPuter Project

Prepared For: Montana State University

Telcordia Contact: George Clapp, 732-758-4346, [email protected]
Monday, October 20, 2003

An SAIC Company

Page 2: The OptIPuter Project - Montana

Overview

OptIPuter: combination of Optical, IP, and computer

Objective: Distributed Virtual Computer
– “…enable collaborating scientists to interactively explore massive amounts of previously uncorrelated data by developing a radical new architecture…”
– “…’waste’ bandwidth and storage in order to conserve ‘scarce’ computing…”
– “…tightly-integrated cluster of computational, storage, instrumentation and visualization resources, linked over parallel optical networks…”

Page 3: The OptIPuter Project - Montana

Fundamental Premise: Mismatch in Rates of Growth

Scientific American, January 2001

Computation is the scarce resource
Electronic packet processing is the bottleneck

Page 4: The OptIPuter Project - Montana

OptIPuter Organization

National Science Foundation (NSF) Information Technology Research (ITR) program
– Directorate for Computer and Information Science and Engineering (CISE)
– $13.5M over 5 years
– Began October 2002
– First annual review on September 29, 2003
– http://www.optiputer.net/events/9-29-03.html
– Slides are credited to the researchers

Participants
– University of California-San Diego (L. Smarr, A. Chien, G. Hidley, M. Ellisman)
– San Diego Supercomputer Center (P. Papadopoulos)
– San Diego State University (E. Frost)
– University of California-Irvine (R. Grossman)
– University of Southern California (J. Bannister, C. Kesselman)
– University of Illinois at Chicago (T. DeFanti, M. Brown, J. Leigh, O. Yu)
– Northwestern University (J. Mambretti)
– Apologies to those not mentioned

Page 5: The OptIPuter Project - Montana

OptIPuter Motivation and Objectives

Motivation from e-Science
– Grid
– National Cyberinfrastructure

What are the Science Barriers We are Trying to Overcome?
– Scientists need collaborative and visual interactions with gigabyte data objects
– Production, Shared Internet Limits Speed of File Transfers
– Inadequate Campus Grid Infrastructure

Creating OptIPuter Laboratories at San Diego and Chicago
System Software: From Grid to LambdaGrid
Education and Outreach

Page 6: The OptIPuter Project - Montana

Gigabyte Data Objects Need Interactive Visualization

Extremely large data sources
– E.g., microscopes and telescopes
– Remote Sensing
– Seismic or Medical Imaging
– Supercomputer Simulations

Hundred-Million Pixel 2-D Images require new presentation techniques

Page 7: The OptIPuter Project - Montana

On-Line Microscopes Create Very Large Biological Montage Images

Laser Microscope
– High Speed On-line Capability

Using High-Res IBM Displays to Interactively Pan and Zoom Large Montage Images

Montage Image Sizes Exceed 16x Highest Resolution Monitors
– ~150 Million Pixels!

Source: David Lee, NCMIR, UCSD

IBM 9M Pixels
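
A quick back-of-the-envelope check of the 16x figure above (the division below is illustrative arithmetic, not taken from the slide): a ~150-megapixel montage spread across ~9-megapixel displays needs on the order of sixteen of them.

    \frac{150 \times 10^{6}\ \text{pixels}}{9 \times 10^{6}\ \text{pixels per display}} \approx 16.7\ \text{displays}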

Page 8: The OptIPuter Project - Montana

NSF’s EarthScope: Rollout Over 14 Years, Starting With Existing Broadband Stations

Page 9: The OptIPuter Project - Montana

Many Groups Are Experimenting with Tiled Displays for 2D-Montage and 3D-Volumetric Viewing

PerspecTile Running JuxtaView Jason Leigh, EVL, UIC

www.llnl.gov/icc/sdd/img/images/pdf/Walls97.pdf

[Figure: tiled walls built from LCD panels and video projectors, each 3x5 tiles, ~20 megapixels total]

Page 10: The OptIPuter Project - Montana

Application Barrier Two: Shared Internet Makes Interactive Gigabyte Impossible

NASA Earth Observation System
– Over 100,000 Users Pull Data from Federated Repositories; Two Million Data Products Delivered per Year
– 10-50 Mbps (May 2003) Throughput to Campuses, Typically Over Abilene, From Goddard, Langley, or EROS
– Best FTP with Direct Fiber OC-12: Goddard to U Maryland at 123 Mbps
– UCSD to Goddard at 12.4 Mbps
– Interactive Megabyte Possible, but Gigabyte is Impossible

Biomedical Informatics Research Network (BIRN) between UCSD and Boston: Similar Story
– Lots of Specialized Network Tuning Used; 50-80 Mbps
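
To see why these rates rule out interactivity, here is a rough transfer-time estimate for a one-gigabyte object (illustrative arithmetic with assumed round numbers, not figures from the slide):

    # Rough transfer time for a 1 GB (8e9-bit) data object at the throughputs discussed above.
    object_bits = 8e9
    for label, rate_bps in [("shared Internet, 50 Mbps", 50e6),
                            ("tuned OC-12 path, 123 Mbps", 123e6),
                            ("dedicated lambda, 10 Gbps", 10e9)]:
        print(f"{label}: {object_bits / rate_bps:.1f} s")
    # ~160 s, ~65 s, and ~0.8 s respectively -- only the dedicated
    # lambda gets anywhere near interactive response.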

Page 11: The OptIPuter Project - Montana

Solution is to Use Dedicated 1-10 Gigabit Lambdas (WDM)

c = f × λ

Source: Steve Wallach, Chiaro Networks
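
The expression above is the usual wavelength-frequency relation behind WDM. As an illustrative example (the 193.1 THz value is a common ITU grid frequency, assumed here rather than taken from the slide):

    \lambda = \frac{c}{f} = \frac{2.998 \times 10^{8}\ \mathrm{m/s}}{193.1 \times 10^{12}\ \mathrm{Hz}} \approx 1552.5\ \mathrm{nm}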

Page 12: The OptIPuter Project - Montana

The UCSD OptIPuter Deployment

[Campus map: OptIPuter sites at SIO, SDSC, SDSC Annex, CRCA, Phys. Sci–Keck, SOM, JSOE, Preuss, 6th College, Node M, Earth Sciences, Medicine, Engineering, and the High School, linked to a collocation point and out to CENIC; grid scale ½ mile; campus span 2 miles, 0.01 ms]

Forged a New Level of Campus Collaboration in Networking Infrastructure

Juniper T320: 0.320 Tbps backplane bandwidth
Chiaro Estara: 6.4 Tbps backplane bandwidth (20X)

Details of the UCSD and UIC campus infrastructure are in the Tom DeFanti and Phil Papadopoulos talks

Source: Phil Papadopoulos, SDSC; Greg Hidley, Cal-(IT)2

Page 13: The OptIPuter Project - Montana

Chicago Metro-Scale OptIPuter

• Electronic Visualization Laboratory and Northwestern Univ use I-WIRE and OMNInet fiber to connect clusters
• OMNInet is a 10GE Metro-Scale Testbed (Joe Mambretti)
• I-WIRE is a State of Illinois Initiative


Page 14: The OptIPuter Project - Montana

Chicago Metro-Scale OptIPuter

[Metro map legend: Int’l GE/10GE, Nat’l GE/10GE, I-WIRE OC-192, OMNInet 10GEs, 16x1 GE cluster links]

128x128 Calient and 64x64 Glimmerglass (GG) optical switches
16-dual-Xeon cluster at NU; 16-dual-Xeon cluster at UIC/EVL
All processors also connected by GE to routers

Page 15: The OptIPuter Project - Montana

Studying Both Routed and Switched Lambdas

OptIPuter evaluating both:
– Routers: Chiaro, Juniper, Cisco, Force10
– Optical Switches: Calient, Glimmerglass (GG)

UCSD Focusing on IP Routing Initially
UIC/NU Focusing on Optical Switching Initially

Page 16: The OptIPuter Project - Montana

Chiaro IP Router Architectural Overview

Optical Phased Array (OPA) fabric enables large port count
Global arbitration provides guaranteed performance
Smart line cards

[Diagram: multiple network-processor line cards connect through the Chiaro OPA fabric, with electrical global arbitration alongside the optical fabric]

Page 17: The OptIPuter Project - Montana

UCSD Uses Chiaro Optical Phased Array: Multiple Parallel Optical Waveguides

[Diagram: an input optical fiber feeds 128 parallel GaAs waveguides (WG #1 … WG #128) across air gaps to the output fibers]

Page 18: The OptIPuter Project - Montana

UIC Calient DiamondWave Switches for StarLight and NetherLight

3D MEMS (Micro-Electro-Mechanical Systems) structure
– 128x128 at StarLight
– 64x64 at NetherLight

Page 19: The OptIPuter Project - Montana

Year Two Metro-Scale Experimental Network

Fiber link between UCSD and SDSU

Page 20: The OptIPuter Project - Montana

Year Two State-Scale Experimental Network

[State map: CENIC optical connections linking UCSD, SDSU, USC, UCI, and possibly NASA Ames; Chiaro router at UCSD; ~400 miles, 2 ms]

Source: CENIC

Identified at UC Irvine:
• Cluster Lab
• Campus Optical Fiber
• CENIC Optical Connections
• OptIPuter Up End of 2003

Page 21: The OptIPuter Project - Montana

Year Two National-Scale Experimental Network

“National Lambda Rail” links the Chicago OptIPuter (StarLight: NU, UIC) and the SoCal OptIPuter (USC, UCI, UCSD, SDSU)

2000 Miles, 10 ms = 1000x Campus Latency

Source: John Silvester, Dave Reese, Tom West – CENIC
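
The 1000x figure is just propagation delay scaling with distance; a minimal sketch (free-space light speed assumed, which matches the slide's round numbers; in-fiber propagation is roughly 1.5x slower):

    # Propagation delay over the campus (2 mi) and national (2000 mi) spans.
    C = 3.0e8            # speed of light, m/s
    MILE = 1609.0        # meters per mile
    campus_ms = 2 * MILE / C * 1e3        # ~0.01 ms
    national_ms = 2000 * MILE / C * 1e3   # ~10 ms
    print(campus_ms, national_ms, national_ms / campus_ms)   # ratio ~1000x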

Page 22: The OptIPuter Project - Montana

An International-Scale OptIPuter

[Map: international lambda connections to UKLight, CERN, and NorthernLight]

Page 23: The OptIPuter Project - Montana

OptIPuter Optical Network Architecture

Joe Mambretti, International Center for Advanced Internet Research, Northwestern University

Oliver Yu, University of Illinois at Chicago

September 2003

Page 24: The OptIPuter Project - Montana

OptIPuter Optical Architecture Themes

The OptIPuter Shapes Itself to Meet the Precise Needs of the Application Requirements, vs. Today’s Environments in which the Application must be Compromised to Conform to Infrastructure Restrictions

The OptIPuter Enables the Creation of Dynamic Virtual Computer Instantiations, Releasing Resources after Use

Resources Include Optical Networking Components:
– Dynamic Lightpaths Supported by Next Generation Optical Networks

For the OptIPuter, the “Network” is No Longer a Network, but a Large Scale, Distributed System Bus
– An Optical “Backplane” Based on Dynamically Provisioned Datapaths, Including Lightpaths

Also, the OptIPuter Uniquely Addresses the Needs of Extremely Large Scale Sustained Data Flows – Even Those Exhibiting Dynamic, Unpredictable Behaviors

Achieving These Goals Requires New Architecture, Methods, and New Technologies at All Levels – L1 – L7
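
One way to picture the dynamic-virtual-computer idea above is as an allocate-use-release cycle over compute, storage, and lightpath resources. The sketch below is purely illustrative; the names and allocate/release calls are hypothetical, not an OptIPuter API:

    # Hypothetical Distributed Virtual Computer lifecycle: allocate clusters, storage,
    # and a dedicated lightpath; run the application; release everything afterwards.
    from contextlib import contextmanager

    @contextmanager
    def virtual_computer(clusters, storage_tb, lightpath_gbps):
        resources = {"clusters": clusters, "storage_tb": storage_tb,
                     "lightpath_gbps": lightpath_gbps}
        print("allocating", resources)      # e.g. ask the optical control plane for a lightpath
        try:
            yield resources                  # the application runs while the DVC exists
        finally:
            print("releasing", resources)    # lightpath and hosts go back to the pool

    with virtual_computer(["UCSD", "UIC"], storage_tb=10, lightpath_gbps=10) as dvc:
        print("running visualization across", dvc["clusters"])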

Page 25: The OptIPuter Project - Montana

OptIPuter Team Research Agenda

[Multi-leveled architecture diagram: Apps over dynamically allocated lightpaths, switch fabrics, and clusters, with a control plane and physical monitoring spanning the levels]

Page 26: The OptIPuter Project - Montana

Optical Network Architecture Issues

[Layered diagram: Middleware and Application Protocols above the Data, Control, and Management Planes, over Optical Transport]

• Performance, Fault & Configuration, Security
• Traffic Engineering, Topology Info Distribution, Lightpath Selection, Connection Signaling
• Resource Brokering, Advanced Scheduling
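
Of the functions listed above, lightpath selection is the most algorithmic: choose a wavelength that is free on every link of the route. A minimal first-fit sketch, assuming a precomputed candidate path and a per-link table of free wavelengths (the function and data shapes are illustrative, not an OptIPuter interface):

    # First-fit wavelength assignment along a fixed path (illustrative sketch).
    def select_lightpath(path_links, free_wavelengths):
        """path_links: ordered link ids; free_wavelengths: {link id: set of free lambdas}."""
        common = set.intersection(*(free_wavelengths[link] for link in path_links))
        if not common:
            return None        # no continuous wavelength: block, convert, or re-route
        return min(common)     # first fit: lowest-numbered wavelength free end to end

    free = {"A-B": {1, 2, 4}, "B-C": {2, 3, 4}, "C-D": {2, 4}}
    print(select_lightpath(["A-B", "B-C", "C-D"], free))   # -> 2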

Page 27: The OptIPuter Project - Montana

Single Domain Model

[Diagram: Control Plane (Signaling Network) above Data Plane (Optical Transport Network); the client attaches via O-UNI (Signaling) and UNI (Transport); Layer Management and a Management Plane (System Mgt.) oversee both planes]

Page 28: The OptIPuter Project - Montana

Multiple-Domain Models

[Diagram: three domains, each with its own Control Plane and Data Plane; UNI (Signaling) and UNI (Transport) at the user edges; Signaling Gateways interconnect the per-domain control planes]

Page 29: The OptIPuter Project - Montana

Receding Optical Core Cloud: User-Centric Dynamic Lightpath Provisioning

[Diagram: applications in user domains surround the optical core; opening the internals of the optical core lets applications provision lightpaths across it directly]

Page 30: The OptIPuter Project - Montana

OptIPuter Control Plane Paradigm Shift

Traditional Provider Services: Invisible, Static Resources, Centralized Management
– Invisible Nodes, Elements; Hierarchical, Centrally Controlled; Fairly Static
– Limited Functionality, Flexibility

OptIPuter: Dynamic Services, Visible & Accessible Resources, Integrated As Required By Apps
– Greatly Increased Functionality, Flexibility

Page 31: The OptIPuter Project - Montana

Optical Layer Control Plane

[Diagram: a Client Controller on a Client Device connects to Optical Layer controllers over UNI and I-UNI interfaces, with CI interfaces between controllers; the Client Layer Control Plane sits above the Optical Layer Control Plane, both overseen by the Management Plane; traffic flows in the Client Layer Traffic Plane and the Optical Layer Switched Traffic Plane]

Page 32: The OptIPuter Project - Montana

OptIPuter and Inter-Domain Intelligent Signaling

[Diagram: GMPLS signaling spans SURFNET/NetherLight (Holland), CA*net4 (Canada), StarLight, I-WIRE (Illinois), and OMNInet (Chicago); PIN instances sit at the domain boundaries, ODIN controls OMNInet, and gigabit clusters are the endpoints]

PIN (Photonic Inter-domain Negotiator)
ODIN (Optical Dynamic Intelligent Signaling)
Other inter-domain projects at CANARIE, U of Amsterdam, et al.

Source: Oliver Yu

Page 33: The OptIPuter Project - Montana

PIN Architecture

[Diagram: three domains, each fronted by a PIN (PIN 1, 2, 3) containing a Dispatcher, a Routing Table, and a Translator; the PINs exchange generic signaling messages for inter-domain routing and inter-domain signaling; each PIN performs intra-domain signaling to its local control plane (GMPLS Control Plane 1 with ODIN in Domain 1; Control Planes 2 and 3 in Domains 2 and 3); User A and User B are the endpoints]

Source: Oliver Yu
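
Reading the figure as a message flow: a PIN accepts a generic signaling request, translates it for its local control plane, and uses its inter-domain routing table to forward the request toward the destination domain. The sketch below is a hypothetical rendering of that flow, not Oliver Yu's implementation:

    # Hypothetical PIN-style dispatch: translate a generic setup request for the local
    # control plane, then forward it toward the destination domain if it is remote.
    def pin_dispatch(request, domain, routing_table, translators):
        local_msg = translators[domain](request)              # Translator: generic -> domain-specific
        print(domain, "intra-domain signaling:", local_msg)
        next_hop = routing_table[domain].get(request["dst_domain"])
        if next_hop and next_hop != domain:
            print(domain, "inter-domain signaling: forward to PIN at", next_hop)
        return local_msg

    request = {"src": "User A", "dst_domain": "Domain 3", "bandwidth_gbps": 10}
    routing = {"Domain 1": {"Domain 3": "Domain 2"}}
    translators = {"Domain 1": lambda r: ("GMPLS/ODIN setup", r["bandwidth_gbps"])}
    pin_dispatch(request, "Domain 1", routing, translators)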

Page 34: The OptIPuter Project - Montana

• 8x8x8λ Scalable photonic switch
• Trunk side – 10 G WDM
• OFA on all trunks

OMNInet Testbed Used for OptIPuter Experiments

[Testbed diagram: photonic nodes at StarLight, the DataCom Ctr, EVL/UIC, and iCAIR/Northwestern, linked by 10 GE trunks over Optera 5200 10 Gb/s TSPR wavelengths (λ1–λ4) with Optera Metro 5200 OFAs; PP8600 switches and 10/100/GIGE host ports at each site; 1310 nm 10 GbE WAN PHY interfaces; StarLight interconnects with other research networks and with CA*net 4; campus fiber: 16 strands to the EVL/UIC OM5200, 4 each to the LAC/UIC and TECH/NU-E OM5200s; initial config 10 lambdas (all GigE), 10GE LAN PHY planned for Dec 03; DOT Grid clusters at the endpoints; links NWUEN-1 through NWUEN-9, with in-use and not-in-use fiber distinguished]

NWUEN link span lengths:

Link   km     miles
1*     35.3   22.0
2      10.3   6.4
3*     12.4   7.7
4      7.2    4.5
5      24.1   15.0
6      24.1   15.0
7*     24.9   15.5
8      6.7    4.2
9      5.3    3.3

Research Partnership: Nortel, SBC, iCAIR, EVL, ANL

OMNInet is a SONET-Free Zone

Page 35: The OptIPuter Project - Montana

OptIPuter and Related Standards Initiatives

Standards areas: P-P Internetworking, Intra-Domain Interoperability, Inter-Domain Interoperability

IETF (Internet Engineering Task Force)
– MPLS, GMPLS, Link Management, CAMP, TE CR-LDP, TE RSVP, IP-over-Optical, OBGP

ITU (Int’l Telecomm Union)
– G.ASON, UNI, PNNI
– Inter-Domain Interoperability, Intra-Domain Autonomous Process, TE RSVP, TE CR-LDP, Hierarchical Support, Architecture

OIF (Optical Internetworking Forum)
– O-NNI, O-UNI

GGF (Global Grid Forum)
– GHPN-RG
– White Paper on Optical Grid released during GGF9 in October