University of Oklahoma
Network Infrastructure
and National Lambda Rail
Why High Speed?
Moving data.
Collaboration.
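To put “Moving data” in rough numbers, the sketch below estimates transfer times for a single research dataset at several link speeds. The 2 TB dataset size and the ~70% effective-throughput figure are illustrative assumptions, not numbers from the presentation.

```python
# Rough transfer-time estimate for one research dataset at several link speeds.
# The 2 TB size and ~70% effective throughput are assumptions for illustration;
# real transfers depend on protocol tuning, disks, and path conditions.

DATASET_BYTES = 2 * 10**12   # assumed 2 TB dataset
EFFICIENCY = 0.7             # assume ~70% of line rate is achievable

for label, bits_per_sec in [("100 Mb/s", 100e6),
                            ("1 Gb/s", 1e9),
                            ("10 Gb/s", 10e9)]:
    seconds = (DATASET_BYTES * 8) / (bits_per_sec * EFFICIENCY)
    print(f"{label:>8}: {seconds / 3600:6.1f} hours")
```

At 100 Mb/s the assumed dataset ties up the link for days; at 10 Gb/s it moves in well under an hour, which is the practical case for the NLR-class links described later in the deck.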
What’s Our Strategy?
End-to-end “big picture” design.
Constantly shifting target architecture.
Consistent deployment methods.
Supportable and sustainable resources.
How Do We Design for Today’s Needs and Tomorrow’s Requirements?
Cabling…
Yesterday: Category 5
Split-pair deployment for voice and data
Cheapest vendor
Poor performance for today’s demands
Cabling… (cont)
Today: Category 6+
Standardized on Krone TrueNet
Gigabit capable
Trained and certified installation team
Issues with older installations still exist
Cabling… (cont)
Tomorrow: Krone 10G
10-Gigabit capable
Purchasing new test equipment
Deployed at National Weather Center
Upgrade of older installations to 6+ or 10G
Fiber Optics…
Yesterday:
Buy cheap
Pull to nearest building
Terminate what you need
Fiber Optics… (cont)
Today:
WDM capable fiber
Pull to geographic route node
Terminate, test, and validate
Issues with “old” fiber still exist
Fiber Optics… (cont)
Tomorrow:
Alternate cable paths
Life-cycle replacement
Inspection and re-validation
Network Equipment…
Yesterday:
10 Mb/s or 10/100 Mb/s to the desktop
100 Mb/s or Gigabit to the building
Buy only what you need (no port growth)
Network Equipment… (cont)
Today:
10/100/1000 to the desktop
Gigabit to the wiring closet
25% expansion space budgeted on purchase
PoE, per-port QoS, DHCP snooping, etc.
Network Equipment… (cont)
Tomorrow:
10-Gig to the wiring closet
Non-blocking switch backplanes
Enhanced PoE, flow collection
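As a rough sense of what “non-blocking” means for the backplane, the sketch below sizes the switching fabric so every port can send and receive at line rate at the same time. The 48-port access switch with dual 10-Gig uplinks is an assumed example, not a specific OU configuration.

```python
# Minimal non-blocking backplane sizing sketch.
# Port counts and speeds are assumed purely for illustration.

access_ports = 48        # assumed 48 Gigabit access ports
access_gbps = 1
uplink_ports = 2         # assumed dual 10-Gig uplinks toward the core
uplink_gbps = 10

# Non-blocking means the fabric carries every port transmitting and
# receiving at full line rate at once (hence the factor of 2).
required_gbps = 2 * (access_ports * access_gbps + uplink_ports * uplink_gbps)
print(f"Required fabric capacity: {required_gbps} Gb/s")  # 136 Gb/s here
```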
Servers…
Yesterday:
One application = one server
Run it on whatever can be found
No consideration for network, power, HVAC, redundancy, or spare capacity
Servers… (cont)
Today:
Virtualizing the environment
Introducing VLANs to the server farm
Clustering and load balancing
Co-locating to take advantage of economies of scale (HVAC, power, rack space)
Servers… (cont)
Tomorrow:
Data center construction
InfiniBand and iSCSI
“Striping” applications across server platforms
App environment “looks like” a computing cluster (opportunities to align support)
ISP (OneNet)…
Yesterday:
Two dark-fiber Gigabit connections
Poor relationship between ISP and OU
ISP… (cont)
Today:
Excellent partnership between ISP & OU
10-Gigabit BGP peer over DWDM
10-Gig connection to NLR
BGP peer points in disparate locations
ISP… (cont)
Tomorrow:
Dual 10-Gig peers… load shared
Gigabit, FC, and 10-Gigabit “on-demand” anywhere on the optical network
Additional ISP peering relationships to better support R&D tenants
WAN…
Yesterday:
OC-12 to I2
OC-12 and OC-3 to I1
All co-located in the same facility
WAN… (cont)
Today:
10-Gigabit (Chicago) and 1-Gigabit (Houston) “routed” connections to NLR
OC-12 to I2, with route preference to NLR
Multiple I1 connections
Multiple I1 peers in disparate locations
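The “route preference to NLR” bullet is a routing-policy choice; the toy sketch below mimics how a BGP-style local preference keeps traffic on the NLR path while the I2 OC-12 remains a fallback. The peer names, speeds, and preference values are made up for illustration and are not OU’s actual policy.

```python
# Toy best-path selection: prefer the NLR peer via higher local preference,
# falling back to I2 if the NLR route goes away. Values are illustrative.

routes = [
    {"peer": "NLR (10-Gigabit, Chicago)", "local_pref": 200, "up": True},
    {"peer": "I2 (OC-12)",                "local_pref": 100, "up": True},
]

def best_path(candidates):
    """Pick the available route with the highest local preference."""
    usable = [r for r in candidates if r["up"]]
    return max(usable, key=lambda r: r["local_pref"]) if usable else None

print(best_path(routes)["peer"])   # NLR wins while its route is present
routes[0]["up"] = False            # simulate losing the NLR path
print(best_path(routes)["peer"])   # traffic falls back to I2
```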
WAN… (cont)
Tomorrow:
LEARN connection for redundant NLR and I2 connectivity
DWDM back-haul extensions to allow NLR and I2 terminations “on-campus”
To what end???
“Condor” pool evolution
“Real-time” data migrations and streaming
Knowledge share
Support share
Ability to “spin-up” bandwidth anytime, anywhere (within reason)
Questions?
[email protected]