
GENI Workshop on Layer 2/SDN Campus Deployment: A Report of the Meeting

July 7, 2011 – Boston Marriott

Table of Contents

1. Background
2. Definitions
3. GENI Concepts
   Figure 1: Network Locations
4. Benefits of Software-Defined Networking
   Figure 2: Open Interface to Hardware (OpenFlow)
5. U.S. Ignite
6. GENI Program Solicitation Status
7. Discussion, Issues, and Comments
8. Action Items
9. OpenFlow Experiences
   A. Clemson University
      Figure 3: Clemson OpenFlow Deployment
   B. Georgia Institute of Technology
      Figure 4: GT-RNOC OpenFlow Testbed
   C. Stanford University
   D. Indiana University
   E. Open Science Grid
10. WiMAX Experiences
   A. University of Colorado
      Figure 5: Throughput and Distance
   B. Rutgers University
      Figure 6: Rutgers Core Network
Appendix A: Additional Resources
Appendix B: Meeting Agenda
Appendix C: Attendees
Appendix D: Contacts for Additional Questions and Comments

1. Background

GENI (the Global Environment for Network Innovations) is an NSF-sponsored program for “exploring future internets at scale.”[1] Its goal is to encourage the development of innovative networking approaches and applications that solve or surmount problems seen in the current Internet and wireless network environments and that accommodate new and expanded network needs. As research and education have pushed the bounds of network usage to include moving terabytes of data at gigabit speeds, performing computation and storing data “in the cloud,” making computing available wirelessly to an increasingly mobile world, and supporting data-intensive applications in fields as diverse as physics and medicine, it has become clear that new approaches are needed.

The GENI program’s goals are to promote innovations in networking that use the existing worldwide network infrastructure to create scalable, flexible (in terms of routing and quality of service), and secure networks for the future. GENI will allow researchers to test new network strategies at scale, using existing network infrastructure to demonstrate protocols and applications without disrupting existing Internet traffic. Researchers and campus IT organizations are working together to satisfy these research and educational needs while efficiently supporting campus IT operations and services.

Over the past four years, the GENI program has strongly supported development of OpenFlow, a suite of software and protocols that implements “software-defined networking” (SDN). OpenFlow creates virtual network “slices” that give researchers an environment to test at-scale alternatives to TCP/IP and other networking approaches. This can be useful within a campus as well as for communicating across campuses. A key point about the GENI approach is that end users can be using different internets without being aware that others are sharing the underlying infrastructure. Different applications can run through different slices in parallel.

While the success of OpenFlow/SDN is as yet uncertain, 14 universities are actively participating in one or more GENI experiments; Internet2 and National LambdaRail are implementing OpenFlow in their networks; over 40 companies have joined the Open Networking Foundation to promote OpenFlow; and over a dozen of these recently demonstrated prototype routers implementing OpenFlow. The goal is to have 50 or more campus participants by the end of 2012, when it is expected that momentum will propel OpenFlow usage to 100-200 organizations over the next several years.

It is in this context that the National Science Foundation (NSF) sponsored a half-day workshop for campus IT leaders to discuss status and gain their support for OpenFlow/SDN implementation. Campus IT organizations provided leadership for the expansion and development of the current Internet and their participation will be integral to the development of the future internet. GENI seeks to enable the partnerships between campus IT organizations and researchers that will make this so.

The meeting was held on July 7, 2011, in Boston, Mass. and was attended by CIOs and other IT executives from 25 universities as well as representatives of EDUCAUSE, Internet2, GENI, and NSF (CISE and OCI).

2. Definitions

GENI: NSF program Global Environment for Network Innovations

GPO: GENI Program Office. Managed by BBN Technologies under an NSF grant.

Internet: Worldwide, interconnected data networks using IP (Internet Protocol).

internet: General usage for interconnected data networks; also, the hardware infrastructure underlying the Internet.

SDN: Software-Defined Networking. Refers to the key concepts of separating the network’s data and control planes, logically centralized network control (in an extensible network OS), an open interface between the data and control planes (OpenFlow), and virtualization of the network infrastructure into “slices.”

[1] GENI website: http://www.geni.net/

Slice: Independent virtual network running over the internet infrastructure via software installed in routers and other equipment.

OpenFlow: Software suite that controls slices and provides a “network operating system.”

WiMAX: 4G wireless protocol adopted by GENI to set up campus wireless networks.

Controller: Server running OpenFlow software, the network OS, and applications.

3. GENI Concepts
Based on a presentation by Chip Elliott, Director, GENI Program Office

GENI is a virtual laboratory for exploring future internets at scale. A key premise of the program is that it is infeasible to build physical networks “as big as the Internet” to test out new networking ideas, so these new internets have to run over the current network infrastructure without disrupting existing traffic. This has to be accomplished with minimal modification to the current Internet infrastructure. The plan is to create virtual “slices” of the worldwide network infrastructure and allow “deep programmability” through the introduction of GENI-enabled nodes running OpenFlow software; GENI racks of sliced, virtualizable computers and storage; and programmable WiMax nodes for wireless capability.

Slices are entire virtual networks running on the same physical infrastructure, with all aspects of the network, from routers and circuits to computers, available independently to each slice. Using this approach, many different “future internets” can operate in parallel. The underlying software that allows the creation of slices is called OpenFlow, and existing (or new) routers must be “OpenFlow enabled” before slices can be created. Slices can exist in networks that contain non-OpenFlow devices, either by bypassing those devices or, where necessary, by VLAN tunneling through them (this approach was successfully demonstrated by passing through non-OpenFlow-enabled regional networks in a multi-university experiment in November 2010). OpenFlow-enabled backbones now exist on portions of eight college campus networks and through several cities via Internet2 and National LambdaRail.
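To make the slicing idea concrete, the toy sketch below maps VLAN tags to slices and their controllers, in the spirit of how experiment traffic can be demultiplexed on shared infrastructure (in GENI practice a slicing proxy such as FlowVisor performs this role). All slice names, VLAN IDs, and controller addresses are hypothetical; this is an illustration, not part of the GENI software.

```python
# Illustrative sketch only: a toy "slicer" that assigns traffic to slices by
# VLAN tag. All names, VLAN IDs, and controller addresses are hypothetical.

from dataclasses import dataclass
from typing import FrozenSet, Optional


@dataclass(frozen=True)
class Slice:
    name: str
    controller: str            # host:port of this slice's OpenFlow controller
    vlan_ids: FrozenSet[int]   # VLAN tags reserved for this slice


SLICES = [
    Slice("campus-production", "prod-ctrl.example.edu:6633", frozenset({10, 20})),
    Slice("geni-experiment-42", "exp-ctrl.example.edu:6633", frozenset({3715})),
]


def slice_for_vlan(vlan_id: int) -> Optional[Slice]:
    """Return the slice that owns this VLAN tag, or None if unclaimed.

    In a real deployment a slicing proxy sits between the switches and the
    per-slice controllers and performs this demultiplexing; unclaimed traffic
    stays on the production network.
    """
    for s in SLICES:
        if vlan_id in s.vlan_ids:
            return s
    return None


if __name__ == "__main__":
    print(slice_for_vlan(3715).name)  # -> geni-experiment-42
    print(slice_for_vlan(999))        # -> None (left to the production network)
```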

Figure 1: Network Locations

[Map of GENI deployments: WiMAX sites (Stanford, UCLA, UC Boulder, Wisconsin, Rutgers, Polytech, UMass, Columbia); ShadowNet sites (Salt Lake City, Kansas City, DC, Atlanta); OpenFlow backbone nodes (Seattle, Salt Lake City, Sunnyvale, Denver, Kansas City, Houston, Chicago, DC, Atlanta); OpenFlow campuses (Stanford, U Washington, Wisconsin, Indiana, Rutgers, Princeton, Clemson, Georgia Tech).]

Deep programmability is the idea of being able to program routers, firewalls, and other parts of the Internet infrastructure to create new protocols and types of networking. It requires unbundling the hardware and software that control these devices, or at least providing APIs that allow direct access to core hardware functions, such as data packet forwarding. Devices that are OpenFlow enabled allow deep programmability in slices that do not interfere with existing network activity (a minimal controller sketch follows the examples below). Examples of faculty research demonstrating deep programmability:

University of Illinois at Urbana-Champaign: programming innovative new routing protocols that let users monitor and select their own network paths to optimize services, with no waiting for network adaptation time.

Columbia University: developing a new content distribution system (ActiveCDN) that shifts distribution in real-time as demand changes.

UMass-Amherst: operating a real-time weather forecasting service that creates slices to channel cloud computing resources to the part of the country with severe weather conditions.

Stanford University: programming real-time load-balancing functionality (Aster*x) deep into the network.
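As a minimal illustration of what deep programmability looks like in code, the sketch below uses the POX controller framework (an assumption; the report does not name a controller) to install a single flow rule on every switch that connects. Real research applications such as those above compute far richer rules, but the mechanism is the same.

```python
# Minimal "deep programmability" sketch using the POX OpenFlow controller.
# POX and this flooding rule are illustrative choices, not taken from the report.

from pox.core import core
import pox.openflow.libopenflow_01 as of

log = core.getLogger()


def _handle_ConnectionUp(event):
    # When a switch connects, push a flow rule that floods all traffic.
    # Swapping in smarter rule computation here is where new routing,
    # load-balancing, or security behavior would be programmed.
    msg = of.ofp_flow_mod()
    msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
    event.connection.send(msg)
    log.info("Installed flood rule on switch %s", event.dpid)


def launch():
    # Run from a POX checkout, e.g.: ./pox.py <this_module>
    core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)
```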

4. Benefits of Software-Defined Networking
Based on a presentation by Matt Davy, Chief Network Engineer, Indiana University Bloomington

Campuses face multiple challenges to providing network services:

Rapidly changing computing technology, including the introduction of new devices such as smartphones.

Massively diverse user population and networking needs, from consumer electronics to high-performance computing.

Bandwidth demand growing 30-40% per year or more.

Technically complex environment, with as many as 15 different protocols.

Costs going up—both capital and operating.

Software-defined networking addresses all of these challenges by introducing virtualization and deep programmability, building a network operating system and applications “above the network devices” with APIs to an OpenFlow stub running on each network device. Complexity becomes easier to manage because it is hidden behind this abstraction layer. This takes networking in the same direction that general computing has been moving for years, but it is a fundamentally different approach from networking today.

Figure 2: Open Interface to Hardware (OpenFlow)

Benefits of software-defined networking (SDN):

Virtualization of network routers and other control equipment.
  o Networks are currently defined “by the box,” with integrated hardware, software, and applications.
  o Virtualization will foster competition and innovation for each component.

Simplified management of complex campus networks.
  o Centralized management instead of individual device management (much as is being done with the current generation of WiFi access points).
  o Remote management and instrumentation of the network.

Reduced costs.
  o Reduced capital costs through the commoditization of network devices.
  o Reduced operating costs through centralization of management.

Ability to implement innovative network structures and applications without disrupting the existing Internet infrastructure.
  o Introduction of innovation through software rather than hardware, resulting in reduced cycle time from laboratory to production.

5. U.S. Ignite

From Chip Elliott (Director, GENI Program Office), James Pepin (Chief Technology Officer, Clemson University), and Suzi Iacono (Senior Science Advisor, NSF)

This NSF program, as yet unfunded, is planned to foster the development of “next-generation applications” requiring gigabit bandwidth and utilizing broadband city infrastructure. GENI will be the control framework, with GENI racks and OpenFlow added in participating cities. Application focus areas will be transportation, energy, health, education, and public safety. Interest has been expressed by Chattanooga, Tenn.; Cleveland, Ohio; Lafayette, La.; Salt Lake City, Utah (regional); Philadelphia, Penn.; and Washington, D.C. Initial surveys have been done, and Layer 2 connectivity appears quite feasible to integrate these cities into the GENI environment, with project teams forming now for possible launch in the fall of 2011.

6. GENI Program Solicitation Status
From Suzi Iacono (Senior Science Advisor, NSF) and Larry Landweber (GENI Project Office)

GENI has been funded for the past four years at the level of tens of millions of dollars. Although this is an experiment, the next stage is to scale up. While the NSF budget is expected to be flat, GENI is seeking additional funding for 2012 to supplement its current budget, which will allow the next solicitation for GENI grants to be published (Solicitation 3). There is some possibility that NSF will have additional money to do broadband research, which may then be available to the GENI program (not yet approved by Congress).

The goal is to expand to 50 GENI-enabled campuses over the next several years. Additionally, the solicitation will seek to expand GENI-enabled regional networks and Internet2 and National LambdaRail nodes, as well as placing GENI racks (for programmable routers, caching, content distribution, and transcoding nodes, or distributed clouds) and adding WiMAX base stations.

The GPO expects that responses will be collaborations between computer science and other academic departments with the campus IT organization, and sponsored by the CIO. The role of campus IT is considered “absolutely essential” to this effort. In addition to funding, the GPO will be arranging mentorship (“buddy system”) from existing GENI-enabled campuses to ensure efficient start-up at new campuses.

According to Iacono, “Scientists and engineers have to have access to the best networking in the world, and that needs CIO support.” This is a unique opportunity that comes along once every 20 years—now is the time to participate.

7. Discussion, Issues, and Comments
From the general discussion

What are the most likely uses of Software-Defined Networking?
  o SDN can be important in computer science research; for the campus IT organization in improving campus network management; and for doing large data transfers and experiments across multiple campuses and potentially around the world.
  o On campus, OpenFlow and SDN could reduce complexity (much simpler than MPLS and VLANs) and provide improved security mechanisms.
  o Across multiple campuses, OpenFlow and SDN could provide an opportunity to think differently about network capabilities.

What is required to get started in using OpenFlow?
  o There is no documentation that provides step-by-step instructions or estimates of the time involved.
  o The plan is to implement a mentoring “buddy” system to help new campuses get started. While there is no certification, there are already OpenFlow “gurus” on various campuses who will work with new campuses.
  o The GPO is considering the creation of a “boot camp” for network engineers.
  o NSF has helped provide funds (at a low level) to have staff implement OpenFlow on campuses. [The GPO will help campuses prepare to request these funds.]

What is the cost of implementing OpenFlow on top of existing campus networks?
  o Switches will support OpenFlow in firmware, and that can be added to existing equipment.
  o We don’t know what Cisco will do in terms of equipment and pricing.
  o WiMAX base stations cost 3-4x as much as 802.11 base stations.

How does OpenFlow address security? After 30 years of dealing with Internet security issues, CIOs would like to know how OpenFlow addresses security and solves the problems in today’s Internet.
  o One of the design criteria for SDN was to make the network more secure.
  o There is no white paper yet on security in OpenFlow. This is recognized as an important action item.

How is SDN programming controlled and managed at the level of individual campuses, as well as for the Internet as a whole?
  o The network OS is programmed locally and can be controlled locally.
  o Anyone can decide to write code for a new “slice” of the Internet, but it is up to the local manager of each controller to allow that slice to be recognized by the controller. If you don’t recognize that slice, then that GENI experiment is invisible to your network. [At the current time, requests for such recognition are passed among community members by e-mail.]
  o Concern was expressed about the management of controllers, since they have the ability to modify tables and other functionality in switches, which raises significant security, efficiency, and reliability issues.

What is the governance and management structure for the OpenFlow software?
  o Stanford and other organizations have taken responsibility for maintaining reference implementations of different layers of the OpenFlow software. However, no overall organization exists to manage this process, which is recognized as an issue by the OpenFlow community.
  o It is still very early in the evolution of this software.
  o There is no standard procedure for fixing problems in the control software.

When a packet goes through the network, what does OpenFlow examine?
  o OpenFlow examines the packet’s header fields and compares them to entries in the switch’s flow table. If no matching entry is found in the flow table, the packet is forwarded to the OpenFlow controller to decide what to do. (A sketch of this matching logic appears below.)
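To illustrate the matching behavior described in the last answer, here is a small, self-contained sketch of OpenFlow 1.0-style flow-table lookup. The field names, table entries, and actions are simplified illustrations, not a real switch implementation.

```python
# Illustrative sketch of OpenFlow 1.0-style matching: a switch compares
# selected packet header fields against its flow table and punts unmatched
# packets to the controller. Fields and entries here are simplified examples.

from dataclasses import dataclass
from typing import List, Optional


@dataclass(frozen=True)
class Match:
    # None means "wildcard" (field not checked), as in OpenFlow 1.0 matches.
    in_port: Optional[int] = None
    eth_type: Optional[int] = None
    ip_dst: Optional[str] = None


@dataclass
class FlowEntry:
    match: Match
    action: str          # e.g., "output:3", "drop"
    priority: int = 0


def matches(rule: Match, pkt: dict) -> bool:
    return all(
        getattr(rule, f) is None or getattr(rule, f) == pkt.get(f)
        for f in ("in_port", "eth_type", "ip_dst")
    )


def lookup(table: List[FlowEntry], pkt: dict) -> str:
    """Return the action for this packet, or send it to the controller."""
    for entry in sorted(table, key=lambda e: -e.priority):
        if matches(entry.match, pkt):
            return entry.action
    return "send-to-controller"   # table miss: the controller decides


if __name__ == "__main__":
    table = [FlowEntry(Match(eth_type=0x0800, ip_dst="10.0.0.5"), "output:3", 10)]
    print(lookup(table, {"in_port": 1, "eth_type": 0x0800, "ip_dst": "10.0.0.5"}))  # output:3
    print(lookup(table, {"in_port": 1, "eth_type": 0x0806}))                        # send-to-controller
```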

8. Action Items

CIOs should indicate campus interest by e-mail to Larry Landweber ([email protected]).

Members of the GENI community will produce a white paper on security in OpenFlow [led by Guru Parulkar, Stanford].

The GPO is developing a “boot camp” for campus technical staff; indicate interest to Chip Elliott ([email protected]).

Be on the lookout for the next NSF GPO solicitation for GENI grants (approval of the solicitation by NSF is expected within a few months).

9. OpenFlow Experiences

A. Clemson University
Jim Pepin, Chief Technology Officer

At Clemson, IT is a core function that supports research and education as well as administrative applications. They believe that universities have to move networking research forward and are participating in multiple efforts. They are one of the most heavily invested campuses in GENI.

Their researchers are interested in mobility, reconfigurability (expectation of resiliency and resource re-configuration), and security. They are involved in a variety of research projects, e.g., smart car research with BMW. They are looking at the speed and efficiency of OpenFlow in reconfiguring VMs in a research condo-cluster. Their many GENI projects are deeply integrated with campus IT, whose network engineers participate with students and faculty as part of project teams, such as OpenFlow mesh and mobility management, OpenFlow campus trial, on-demand VM cloud, pervasive P2P, network coding, and accelerated cloud deployment. They also are involved in joint projects with other universities including GENI racks (RENCI, Stanford) and WiMAX (UW-Madison).

Figure 3: Clemson OpenFlow Deployment

B. Georgia Institute of Technology
Russ Clark, Research Scientist (jointly working with IT and CS)

Georgia Tech has a 120,000-port network over 160 buildings on three campuses. They have 2,100 centrally managed switches and 2,500 wireless access points.

With a network of this size and complexity, they need better tools than VLANs and SSIDs as the units of policy management. They are looking to use OpenFlow to improve control of their network through policy management, access control, capacity monitoring, simplified configuration, and better security control and monitoring. Using OpenFlow, they have built a Network Access Control system for residence halls with much less complexity, better capacity monitoring, simplified configuration, and better security monitoring.

They have built an OpenFlow testbed in four buildings with switches from three vendors. In addition, they have racks of computers for a student project in OpenFlow. They are now giving faculty and researchers the opportunity to put their servers on OpenFlow switches.

Figure 4: GT-RNOC OpenFlow Testbed

C. Stanford University
Mark Miyasaki, Director of Networking Systems

At Stanford, central IT is working with the computer science department on GENI experiments. They are designing the next-generation network backbone for the university. Their hope is that GENI will solve future networking needs, particularly “providing centralized services with local control” and reducing the complexity of network management. Examples of this would be getting rid of VLANs, moving departmental firewalls into the network, making it easier for departmental moves across buildings where subnets need to be reorganized, and getting rid of load balancers and other “mission specific” equipment. They are taking a first step by working with the computer science and electrical engineering departments to dedicate a “slice” of their private network for central IT to run VoIP. This would eliminate the need for central IT to run a dedicated (duplicate) network for VoIP in these departments.

D. Indiana University
Bradley Wheeler, CIO, Professor of Information Systems

Indiana got involved with OpenFlow in 2006 but is only now starting to see it take off. They have OpenFlow running in several buildings and expect to be a significant user within two years. OpenFlow is being considered as part of their 10-year, 350-building campus network master plan, which is currently being created.

They are concerned about a growing gap between network technology and what research and education need. Increasing demands for network services require increasing staff, and this can’t continue indefinitely. They are also concerned about maintaining a good, competitive market in networking to avoid lock-in to a single major player. All of this points to a new way of doing things; hence, GENI.

To help address the need to move from laboratory to deployment, IU has created “InCNTRE” for network “translational research” (moving from theory to practice). InCNTRE has 11 interns working this summer on various projects (http://incntre.iu.edu/initiatives/summer/index.php).

E. Open Science Grid
Miron Livny, Professor, University of Wisconsin; chair of Open Science Grid; developer of Condor

Open Science Grid (OSG) is an NSF- and DOE-funded consortium whose independently owned and managed computing and storage resources use a common set of middleware to provide researchers with a distributed shared-grid infrastructure (www.opensciencegrid.org/About). It is currently working with DOE on developing a 100Gb network.

The question they are exploring with respect to GENI is how to make Condor work with OpenFlow networks. What does it mean for a Condor grid to become an OpenFlow node? How do you make all the software coexist across OpenFlow and the current Internet-based infrastructure?

A potential model for partnership is to involve one or more commercial organizations, such as Red Hat supporting Linux.

10. WiMAX Experiences

A. University of Colorado
Dirk Grunwald, Professor of Computer Science

The University of Colorado is building a GENI-enabled “cognitive radio” testbed using WiMAX technology. WiMAX was designed for wide area mobile networks and provides better capabilities than 802.11 in terms of network integration, device and user authentication, security, and handover. The goal is coverage rather than capacity, although that can be increased by having more access points. Throughput depends on bandwidth and signal quality (distance from access point).

Figure 5: Throughput and Distance
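To give a rough sense of the throughput-versus-distance relationship shown in Figure 5, the sketch below combines free-space path loss with the Shannon capacity bound for a 10 MHz channel. The transmit power, carrier frequency, and noise floor are hypothetical assumptions chosen for illustration; they are not measurements from the Colorado testbed.

```python
# Back-of-the-envelope sketch of throughput vs. distance: free-space path loss
# reduces SNR with distance, and Shannon capacity bounds the achievable rate.
# The link-budget values below are illustrative assumptions only.

import math


def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB (ignores buildings and foliage)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55


def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Upper bound on throughput for a given bandwidth and SNR."""
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))


# Assumed link budget (hypothetical values).
TX_POWER_DBM = 30.0        # base station transmit power
NOISE_FLOOR_DBM = -104.0   # thermal noise (kTB) for a 10 MHz channel
FREQ_HZ = 2.59e9           # a 2.5 GHz-band WiMAX carrier
BANDWIDTH_HZ = 10e6        # 10 MHz channel

for d in (100, 300, 1000, 3000):
    snr_db = TX_POWER_DBM - free_space_path_loss_db(d, FREQ_HZ) - NOISE_FLOOR_DBM
    cap_mbps = shannon_capacity_bps(BANDWIDTH_HZ, snr_db) / 1e6
    print(f"{d:>5} m: SNR ~{snr_db:5.1f} dB, capacity bound ~{cap_mbps:6.1f} Mb/s")
```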

Spectrum planning and access point siting are challenging, as most of this spectrum is “line of sight” (buildings get in the way). The accuracy of spectrum-planning tools is variable, so direct measurement of signal quality is important to successful implementation.

LTE (Long Term Evolution) is the next-generation (4G) protocol being adopted by most cellular companies, but it has problems for campus implementation. Spectrum is not currently available for campuses to set up their own networks, and access points cannot be directly programmed. LTE seems to be similar to WiMAX in other ways, but it comes out of the telecom community, while WiMAX comes out of the computer community.

B. Rutgers University
Ivan Seskar, Associate Director, IT

Rutgers has deployed WiMAX access points in three places on campus. Five different qualities of service can be specified by the client. WiMAX is providing better service in moving vehicles than WiFi. Personal computers with WiMAX capability can coexist with commercial and campus services (phones, however, are “locked” to cellular providers, so won’t connect to campus WiMAX). Rutgers is developing software to allow a single base station to operate as multiple virtual base stations. They are also experimenting with using WiMAX for other services, for example, as backhaul for temporary WiFi installations. As noted above, direct measurement is necessary to ensure good coverage.

Figure 6: Rutgers Core Network

[Diagram: outdoor GENI WiMAX base stations, outdoor RU wireless, and the ORBIT grid at the WINLAB Technology Center connect through OpenFlow-enabled switches (IP8800) and NetFPGA-based OpenFlow routers on the Busch and Cook campuses, over the Rutgers core network (L3 to L2) to the MEGPI PoP in Philadelphia; non-OpenFlow access routers (AR) complete the path.]

Appendix A: Additional Resources

GENI Layer2/Software-Defined Networking Campus Deployment Workshop (Boston, MA – July 7, 2011) http://www.educause.edu/Resources/Browse/GENI/34673 Full slides from the meeting. Additional information, including the agenda (see also Meeting Agenda) and white papers can be found at: https://mywebspace.wisc.edu/pchrist2/web/Openflow/

GENI: Exploring Networks of the Future www.geni.net NSF program site.

InCNTRE: Indiana Center for Network Translational Research and Education incntre.iu.edu

IU Networking iunetworking.blogspot.com Blog by Matt Davy, Chief Network Engineer, Indiana University.

OpenFlow: Enabling Innovation in Your Network www.openflow.org OpenFlow news, specs, Wiki, bug tracking.

Open Networking Foundation www.opennetworkingfoundation.org Network industry organization.

InteropNet OpenFlow Lab http://www.interop.com/lasvegas/it-expo/interopnet/openflow-lab/ - OpenFlow demonstrations from the Interop Expo, May 8-12, 2011

Appendix B: Meeting Agenda

GENI Layer2/Software-Defined Networking Campus Deployment Workshop

July 7, 2011 – Boston, MA

12:00–1:00 p.m. GENI Meeting Kickoff Lunch Session

Meeting Welcome (Greg Jackson, Vice President, EDUCAUSE)

Motivation and Goals (Larry Landweber, GENI Project Office)

1:00–1:30 p.m. GENI Update

Chip Elliott (Principal Investigator and Project Director, GENI Project Office)

1:30–2:00 p.m. Candidate Technologies

OpenFlow (Matt Davy, Chief Network Architect, Indiana University)

WiMAX (Ivan Seskar, Associate Director of Information Technology, Rutgers University)

2:00–2:50 p.m. Panel Session—GENI-Enabled Campuses: Case Studies

Jim Bottum (CIO and Vice Provost for Computing and Information Technology, Clemson University)

Dirk Grunwald (Wilfred and Caroline Slade Endowed Professor, Department of Computer Science, University of Colorado)

Mark Miyasaki (Executive Director, Communication Services, Stanford University)

Bradley Wheeler (Vice President for Information Technology and CIO, Indiana University)

2:50–3:10 p.m. Break

3:10–4:45 p.m. Discussion

Campus deployment issues and strategies

Cooperative multicampus funding strategies

Proposal for funding

4:45–5:00 p.m. NSF Perspective

Suzi Iacono, Senior Science Advisor, NSF, CISE

Appendix C: Attendees

The following attendees joined the July 7, 2011 GENI Layer2/Software-Defined Networking Campus Deployment Workshop:

Jim Bottum, Chief Information Officer and Vice Provost for Computing & Information Technology, Clemson University
Russ Clark, Director, Georgia Tech Research Network Operations Center (GT-RNOC), Georgia Institute of Technology
Doug Comer, Distinguished Professor of Computer Science, Purdue University
Steve Corbato, Director, Cyberinfrastructure, University of Utah
Matt Davy, Chief Network Architect, Indiana University
Leo Donnelly, Senior Network Architect, University Information Systems, Harvard University
John Dubach, Chief Information Officer, University of Massachusetts
Chip Elliott, Principal Investigator and Project Director, GENI Project Office
Hakeem Fahm, Director of Information Technology, University of the District of Columbia
Tracy Futhey, Vice President and Chief Information Officer, Duke University
Lev Gonick, Vice President for Information Technology Services/CIO, Case Western Reserve University
Dirk Grunwald, Wilfred and Caroline Slade Endowed Professor, Department of Computer Science, University of Colorado
Thomas Hauser, Director of Research Computing, University of Colorado
Suzi Iacono, Senior Science Advisor, NSF, CISE
Sally Jackson, Chief Information Officer and Associate Provost, University of Illinois at Urbana-Champaign
Greg Jackson, Vice President, EDUCAUSE
Dan Jordt, Director, Research Networks, University of Washington
Michael Krugman, Associate Vice President for Information Services and Technology, Boston University
Larry Landweber, GENI Project Office
Larry Levine, Associate Vice Chancellor for IT and Chief Information Officer, University of Colorado

Bob Lim, Chief Information Officer, University of Kansas, Lawrence
Miron Livny, Chief Technology Officer of the Wisconsin Institutes for Discovery, University of Wisconsin
Gerry McCartney, VP for Information Technology and CIO, Purdue University
Rick McMullen, Director of Research Computing and Senior Scientist, University of Kansas, Lawrence
Mark Miyasaki, Executive Director, Communication Services, Stanford University IT Services
Anita Nikolich, Executive Director, Infrastructure, University of Chicago
Phil Oldham, Provost and Vice Chancellor for Academic Affairs, University of Tennessee, Chattanooga
Guru Parulkar, Executive Director, Clean Slate Program, and Consulting Professor of Electrical Engineering, Stanford University
Laura Patterson, Associate Vice President & CIO, University of Michigan, Ann Arbor
Glenn Ricart, Principal, The Aerie; Consultant, GENI Project Office
Ivan Seskar, Associate Director of Information Technology, Rutgers University
Kevin Thompson, Program Director, NSF
Johan van Reijendam, Network Security Engineer, Stanford University IT Services
Donnie Vandal, Executive Director, Louisiana Optical Network Initiative (LONI)
Rob Vietzke, Executive Director, Network Services, Internet2
Brad Wheeler, Vice President for Information Technology, CIO & Dean, Indiana University System
Steve Wolff, Interim Vice President & Chief Technology Officer, Internet2

Appendix D: Contacts for Additional Questions and Comments

For more information about this workshop or about GENI, please contact:

Larry Landweber, GENI Project Office, [email protected]

Chip Elliott, GENI Project Director, [email protected]

This report was written by Jerry Grochow, consultant to EDUCAUSE. Contact: [email protected]