
THE BUSINESS VALUE OF TECHNOLOGY
Mobile app dev needs agile 10 | Oracle ups its analytics game 13 | WAN optimization vendors ranked 16 | 4 steps to secure virtualization 44 | U.S. is tops in tech 52
OCT. 17, 2011
[PLUS] INSIDE OPENFLOW
Networking standard challenges the status quo p.38
Amid new apps and virtualization, here’s how you’ll take better control of your network p.33
By Art Wittmann
Copyright 2011 UBM LLC. Important Note: This PDF is provided solely as a reader service. It is not intended for reproduction or public distribution. For article reprints, e-prints and permissions please contact: Wright's Reprints, 1-877-652-5295 / [email protected]
CONTENTS Oct. 17, 2011, Issue 1,313

COVER STORY
33 New Way To Network Software-defined networking threatens to disrupt Ethernet and TCP/IP technologies
38 Inside OpenFlow The networking protocol still has much to prove in order to bring flexibility to the virtualized data center

[QUICKTAKES]
13 Speedier Data Analysis Oracle has its answer to rivals' in-memory database appliances
14 API Advantage Effective APIs for developers can help business expand
16 WAN Optimization Vendors Three vendors lead the way with WAN optimization appliances
18 Better Backup Cloud providers build globally dispersed chains of data centers
44 Virtualization Security Checklist These four steps can help you protect assets and respond to threats
Contacts
6 Editorial Contacts
6 Advertiser Index
4 Research And Connect InformationWeek in-depth reports, events, and more
8 CIO Profiles Put family’s needs ahead of career, CRST International’s CIO advises
10 Global CIO Agile development keeps pace with the mobile world
50 VMs Vs. The Network IEEE addresses virtualization’s switching and security problems
52 Down To Business U.S. still ranks highest in IT innovation, study finds
INFORMATIONWEEK (ISSN 8750-6874) is published 22 times a year (once in January, July, August, November, and December; twice in February, March, April, and October; and three times in May, June, and September) by UBM LLC,
600 Community Drive, Manhasset, NY 11030. InformationWeek is free to qualified management and professional personnel involved in the management of information systems. One-year subscription rate for U.S. is $199.00; for
Canada is $219.00. Registered for GST as UBM LLC. GST No. 2116057, Agreement No. 40011901. Return undeliverable Canadian addresses to Pitney Bowes, P.O. Box 25542, London, ON, N6C 6B2. Overseas air mail rates are: Africa,
Central/South America, Europe, and Mexico, $459 for one year. Asia, Australia, and the Pacific, $489 for one year. Mail subscriptions with check or money order in U.S. dollars payable to: INFORMATIONWEEK. For subscription
renewals or change of address, please include the mailing label and direct to Circulations Dept., INFORMATIONWEEK, P.O. Box 1093, Skokie, IL 60076-8093. Periodicals postage paid at Flushing, NY and additional mailing offices.
POSTMASTER: Send address changes to INFORMATIONWEEK, UBM LLC, P.O. Box 1093, Skokie, IL 60076-8093. Address all inquiries, editorial copy, and advertising to INFORMATIONWEEK, 600 Community Drive, Manhasset, NY 11030.
PRINTED IN THE USA.
47 DLL Hell Redux NuGet helps .NET developers tame out-of-control third-party components
48 Developer Madness New technologies could make Windows 8 migration costly
InformationWeek Analytics: Take a deep dive with these reports

Watch It Now
Big Data: Marketer mines data for trends
At the IW 500 Conference, Catalina Marketing CIO Eric Williams explained how the company uses predictive analytics. informationweek.com/video/williams
More InformationWeek
Government In The Cloud GovCloud 2011 lets government IT pros learn the latest on cloud options. Join us in Washington, D.C., on Oct. 25. informationweek.com/government/govcloud2011
Enterprise 2.0 Attend Enterprise 2.0 and see how to drive business value with collaboration. In Santa Clara, Calif., Nov. 14-17. e2conf.com/santaclara
Become A Security Detective In this all-day virtual event, experts will offer insight into how to collect security intelligence and analyze it to identify new threats. It happens Oct. 20. informationweek.com/1311/event
Let The News Find You Get the news topics you follow, including healthcare, business intelligence, and security, delivered to your in-box. informationweek.com/getalerts
Never Miss A Report
Subscribe to our more than 800 reports at reports.informationweek.com
>> HP’s Strategy: IT Pros Get Their Say informationweek.com/reports/hpstrategy
>> Service-Oriented IT informationweek.com/reports/serviceoriented
>> Best Practices: Ensuring Reliable UC Coming Oct. 31
Resources to Research, Connect, Comment
Follow Us On Twitter And Facebook
@informationweek fb.com/informationweek
State Of Storage IT teams are packing more information on fewer devices. Find out how smart CIOs will accelerate this trend.
informationweek.com/reports/2011storage
IT Automation Is Just The Start IT’s juggling to keep services running even as business customers pile on new requests. Automation is an important tool to help cope, but it should be coupled with external services and portfolio management.
informationweek.com/reports/itautomation
Safeguard Your VM Disk Files Find out best practices for backing up VM disk files and building an infrastructure that can tolerate hardware and software failures.
informationweek.com/reports/safeguardvm
At Your Service A service catalog is pivotal in moving IT from an unresponsive mass of corporate overhead to an agile business partner.
informationweek.com/reports/servicecatalog
The Data Dedupe Option As the volume of data continues to grow, IT pros are investing in new technologies, including deduplication.
informationweek.com/reports/datadupe
Please direct all inquiries to reporters in the relevant beat area.
For Advertising and Sales Contacts, go to createyournextcustomer.com/contact-us or call Martha Schwartz at (212) 600-3015
John Foley Editor, [email protected] 516-562-7189
Chris Murphy Editor, [email protected] 414-906-5331
Art Wittmann VP and Director, Reports, [email protected] 408-416-3227
Laurianne McLaughlin Editor In Chief, InformationWeek.com, [email protected] 516-562-7009
Stacey Peterson Executive Editor, Quality, [email protected] 516-562-5933
David Berlind Chief Content Officer, TechWeb, [email protected] 978-462-5315
REPORTERS
Doug Henschen Executive Editor Enterprise software [email protected] 201-660-8467
Charles Babcock Editor At Large Open source, infrastructure, virtualization [email protected] 415-947-6133
Thomas Claburn Editor At Large Security, search, Web applications [email protected] 415-947-6820
Paul McDougall Editor At Large Software, IT services, outsourcing [email protected]
Marianne Kolbasuk McGee Senior Writer IT management and careers [email protected] 508-697-0083
J. Nicholas Hoover Senior Editor Desktop software, Enterprise 2.0, collaboration [email protected] 516-562-5032
Andrew Conry-Murray New Products and Business Editor Information and content management [email protected] 724-266-1310
Eric Zeman Mobile, wireless [email protected]
CONTRIBUTORS
Michael Biddick [email protected]
Michael A. Davis [email protected]
ART/DESIGN
Mary Ellen Forte Senior Art Director [email protected]
Sek Leung Associate Art Director [email protected]
INFORMATIONWEEK REPORTS reports.informationweek.com
Roma Nowak Senior Director, Online Operations and Production [email protected] 516-562-5274
Tom LaSusa Managing Editor, Newsletters [email protected]
Jeanette Hafke Web Production Manager [email protected]
Joy Culbertson Web Producer [email protected]
Nevin Berger Senior Director, User Experience [email protected]
Steve Gilliard Senior Director, Web Development [email protected]
INFORMATIONWEEK VIDEO informationweek.com/video
INFORMATIONWEEK BUSINESS TECHNOLOGY NETWORK
DarkReading.com Security Tim Wilson, Site Editor [email protected]
NetworkComputing.com Networking, Communications, and Storage Mike Fratto, Editor [email protected]
InformationWeek Government John Foley, Editor [email protected]
InformationWeek Healthcare Paul Cerrato, Editor [email protected]
InformationWeek SMB Technology for Small and Midsize Business Paul Travis, Site Editor [email protected]
Dr. Dobb’s The World of Software Development Andrew Binstock, Editor In Chief [email protected]
InternetEvolution.com Future of the Internet Terry Sweeney, Editor In Chief [email protected]
READER SERVICES
InformationWeek.com The destination for breaking IT news and instant analysis
Electronic Newsletters Subscribe to InformationWeek Daily and other newsletters at informationweek.com/newsletters/subscribe.jhtml
Events Get the latest on our live events and Net events at informationweek.com/events
Reports Go to reports.informationweek.com for original research and strategic advice
How To Contact Us informationweek.com/contactus.jhtml
Editorial Calendar informationweek.com/edcal
Reprints Wright’s Media, 1-877-652-5295 Web: wrightsmedia.com/reprints/?magid=2196 E-mail: [email protected]
List Rentals Merit Direct LLC Phone: (914) 368-1083 E-mail: [email protected]
Media Kits And Advertising Contacts createyournextcustomer.com/contact-us
Letters To The Editor E-mail [email protected]. Include name, title, company, city, and daytime phone number.
Subscriptions Web: informationweek.com/magazine E-mail: [email protected] Phone: 888-664-3332 (U.S.) 847-763-9588 (outside U.S.)
ADVISORY BOARD
Robert Carter Executive VP and CIO, FedEx
Michael Cuddy VP and CIO, Toromont Industries
Laurie Douglas Senior VP and CIO, Publix Super Markets
Dan Drawbaugh CIO, University of Pittsburgh Medical Center
Jerry Johnson CIO, Pacific Northwest National Laboratory
Kent Kushar VP and CIO, E.&J. Gallo Winery
Carolyn Lawson CIO, Oregon Health Authority
Jason Maynard Managing Director, Wells Fargo Securities
Denis O’Leary Former Executive VP, Chase.com
Randall Mott Former Sr. Exec. VP and CIO, Hewlett-Packard
Steve Phillips Senior VP and CIO, Avnet
M.R. Rangaswami Founder, Sand Hill Group
Manjit Singh CIO, Las Vegas Sands
David Smoley CIO, Flextronics
Ralph J. Szygenda Former Group VP and CIO, General Motors
Peter Whatnell CIO, Sunoco
CenturyLink www.centurylink-business.com . . . 7
Citrix www.citrix.com . . . . . . . . . . . . . . . . . . . . . . 12
Dell www.dell.com . . . . . . . . . . . . . . . . . . . . . . . . C2
iDashboards www.idashboards.com . . . . . . . . . 43
ManpowerGroup www.manpowergroup.com . . 39
Microsoft www.microsoft.com . . . . . . . . . . . . . . . 9
Print, Online, Newsletters, Events, Research
STEVE HANNAH, CIO, CRST International

Career Track
How long at current company: I've been at this logistics company for seven years.
Career accomplishment I'm most proud of: The creation of Gazette Technologies, a software development company specializing in data warehouse software to serve the media industry.
Most important career influencer: As a young IT manager, I had a great manager and mentor named Tom Redder. Tom had the unique ability to challenge you to do your very best, while teaching by example. In the process, he was able to give you enough rope on a project to let you manage the process, but not enough to let you hang yourself.
Decision I wish I could do over: Many times, it's very hard to balance the desire for growth in your career with the needs of your family. I'm lucky to have two great daughters and a very understanding wife, who have had to make several moves because of my career. I wish I had taken more time to better understand their needs when making a move. Ask for and listen to your family's feedback instead of just focusing on new opportunities.
On The Job
IT budget: $8.7 million
Size of IT team: 47 employees
Top initiatives:
>> We're migrating our freight management system off the mainframe to an open systems environment. This will let us meet the needs of the business and reduce our technology cost of ownership by moving off the mainframe, while embracing current technologies.
>> We're redesigning our recovery plans to significantly reduce recovery time. Our new plans include the ability to fail over to an off-site recovery center and an off-site office facility capable of hosting 165 people.

Leisure activity: Golf
Favorite president: John F. Kennedy, because of his ability to execute on a vision and be decisive
Last vacation: Caribbean cruise
Favorite sports team manager: Jim Leyland of the Detroit Tigers, who has the ability to develop and get the most out of good players
Tech vendor CEO I respect most: Cisco's John Chambers
Smartphone of choice: BlackBerry
How I measure IT effectiveness: The IT team reports to the business on 20 key items, including hardware performance, help desk call resolutions, software development resource allocations, and computer operations efficiencies. We continue to examine these metrics annually and update accordingly.
Vision
One thing I'm looking to do better: Partner with business stakeholders to better understand the issues and how technology can bring better solutions. We've created an environment built for growth and business analytics, and now is the time to drive the value home.
Lesson learned from the last recession: The importance and value of a multiyear technology plan for the company. The plan has to be flexible, of course, but such a guiding focus is especially valuable during tough economic times.
Kids and tech careers: The need to understand technology is key in any career decision given today's environment. I didn't try to steer my kids toward or away from a career in technology. Both have graduated from college and are successful in their careers today. Neither is a professional technologist, but both use technology very effectively.
CIO Profiles: Read other CIO Profiles at informationweek.com/topexecs
Ranked No. 19 in the 2011 InformationWeek 500
GLOBAL CIO
App Dev Must Get Agile Enough For Mobile

Apple is talking up its new iPhone 4S, just 14 months after releasing iPhone 4, touting a faster chip, voice controls, and hooks to cloud-based storage. Here's a question worth asking: Is your company's application development strategy ready for the blistering pace of the mobile world?
Mobile apps are a different beast from big, conventional enterprise IT projects. They tend to involve smaller teams. They're often (not always) less complex. Usability factors can make or break a mobile app, and many of these apps are built for the end customer. And the deadline to create a mobile app is almost always terrifyingly short, as companies fret over falling behind smartphone-toting customers, and their competitors.
All these factors make agile development techniques a good fit for many mobile apps. I say "techniques" because companies often rely on elements of agile, like iterative development and frequent reviews by business unit partners, without embracing the complete agile methodology. And they often work with outside partners who do the actual development.
Another agile-friendly feature of mobile apps is that they don't need to be perfect out of the gate, says Leigh Williamson, a distinguished engineer with IBM's Rational dev tools division. Mobile developers will commonly put out an app with a limited slate of features and update it with more later, Williamson says, and that process fits with the iterative nature of agile. "Good enough for 1.0" isn't a mentality business IT shops tend to embrace easily. Guardians of company brands don't do so well with it either, so together project leaders need to figure out, on the fly, during development, which features make the cut and which wait for the next version.
Williamson is seeing enterprise IT shops that had never used agile development experiment with it when faced with a mobile project. When business units decide it's time to build a mobile app, "they set timelines that are typically quite aggressive—six to eight months to get these out in production," he says.
Another fact that favors agile development is that mobile apps are still the shiny new object: everyone is interested, and everyone has an opinion. One of the barriers to effective agile development with conventional enterprise apps can be the short attention span of business unit leaders. Will they put in the time to review work every week or two, for months on end, to make sure the latest code's doing what they expected?
With mobile apps, "there's constant stakeholder feedback," Williamson says, since usability is critical and the interest level is sky high. The challenge is likely to be too many conflicting opinions, in fact. But better to sort those tensions out early, rather than when the app is nearly "done."
As my colleague Thomas Claburn notes (informationweek.com/1312/apps), companies with mobile app fever first need a good answer for why customers need an app, and how it will pay off for the company.
After the business strategy, IT leaders face the tough choices about their mobile app dev plan. How much in-house talent do we need, versus relying on outsourcers? Should we invest in iOS and Android skills for native apps, or bet bigger on hybrid apps and HTML5? Does our team even understand our end customers enough to build apps for them? Do our IT security skills transfer to assessing mobile risks? And if we needed to deliver a new iPhone app for 1Q 2012, could we deliver?
Chris Murphy is editor of InformationWeek. Write to him at [email protected].
Oracle CEO Larry Ellison introduced a new appliance this month aimed at doing faster business data analytics, but he didn't make a very convincing or exciting business case for it. So we'll fill in the gaps for him.
Oracle's new Exalytics Business Intelligence Machine is an appliance using in-memory database processing to do analysis faster.
Exalytics is clearly an answer to in-memory-powered products such as SAP's Hana Appliance, as well as business intelligence products such as QlikTech's QlikView and Tibco's Spotfire. It builds on Oracle's focus on optimized software-hardware systems, a line that includes Exadata and Exalogic appliances.
The closest Ellison, in his speech at Oracle OpenWorld, came to articulating the business need for Exalytics was in describing instantaneous access to information. "Type in '36-inch Sa' and it will instantly fill in '36-inch Samsung television,'" Ellison said. "Type in '36-inch So' and it will instantly fill in '36-inch Sony television.'" It could be automotive data or financial data or "it could be anything," Ellison crowed, though the description made it sound like a search engine. There were no business-use scenarios.
Ellison did dive into the product details. The hardware includes 1 TB of DRAM memory and 40 CPU cores from four multicore Intel Xeon processors. It scans 20 GB per second, so it can explore the device's memory in about five seconds. One terabyte doesn't sound big, Ellison said, but the device will compress data by five to 10 times, so it's equivalent to 5 to 10 TB of user-accessible data.
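Those claims are easy to sanity-check. A back-of-the-envelope sketch in Python, using only the figures quoted above; note that one full pass over 1 TB at 20 GB per second works out to about 50 seconds, so the "about five seconds" figure evidently assumes a smaller hot working set (roughly 100 GB at that rate) or a higher aggregate scan rate:

```python
# Back-of-the-envelope arithmetic on the Exalytics figures quoted above.
dram_tb = 1.0             # 1 TB of DRAM, as quoted
scan_rate_gb_s = 20.0     # claimed scan rate, in GB per second
compression = (5, 10)     # claimed compression factor, 5x to 10x

# Effective user-accessible capacity after compression
lo, hi = (dram_tb * c for c in compression)
print(f"Effective capacity: {lo:.0f} to {hi:.0f} TB")               # 5 to 10 TB

# Time for one full pass over 1 TB at the quoted scan rate
print(f"Full 1 TB scan: {dram_tb * 1000 / scan_rate_gb_s:.0f} s")   # ~50 s

# What "about five seconds" corresponds to at 20 GB/s
print(f"Scanned in 5 s: {scan_rate_gb_s * 5:.0f} GB")               # ~100 GB
```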
As for the software behind Exalytics, two in-memory, parallelized databases will be available. A new version of TimesTen, which was developed for transactional workloads, will provide an in-memory relational database engine. And Oracle has adapted its Essbase database to Exalytics to support speed-of-thought multidimensional analysis.
Exalytics can be integrated with any server running an Oracle database, but Ellison extolled the virtues of pairing the device with Exadata. If the data is in Exalytics' DRAM memory, the device will provide the ultimate in query performance, and if it's not in memory, Oracle's InfiniBand connectivity to Exadata will move the required data into Exalytics' in-memory cache nearly as quickly. "I recommend you buy both," Ellison quipped.
Exalytics will support all current Oracle Business Intelligence Enterprise Edition (OBIEE) applications unchanged. The appliance also runs an in-memory, parallel-processing version of the Oracle Essbase OLAP database, so it will support a company's existing performance management applications without changes. That's a marked contrast to SAP's Hana Appliance, which to date has required a new generation of purpose-built in-memory applications.
Big Data Play
For companies with big data needs, Oracle laid out a strategy, including plans for a Hadoop appliance, Hadoop data management software, a NoSQL database, and an enterprise-focused release of R analytics software, for statistical analysis.
The breadth of those plans is surprising, given that less than a year ago Oracle was discounting the importance of the NoSQL movement. Hadoop, one NoSQL platform, addresses one of the fastest-growing areas of data: unstructured data such as Web log files, social media data, and mobile data with geospatial information.
The big caveat: Oracle gave no release dates for the big data appliance or related software. If Oracle doesn't deliver soon, it may be effectively stalling for time, hoping to dissuade Oracle customers from experimenting with third-party NoSQL and Hadoop alternatives.
The promise is interesting. Exalytics, paired with the Big Data Appliance and R software, would bring instantaneous in-memory analysis to bear on the results of Hadoop MapReduce jobs, R statistical and predictive models, and graphical analyses. The results could then be delivered through the dashboards and reporting capabilities of OBIEE.
Oracle had to act, as competitors including EMC, IBM, and Teradata have been stepping up their analytics and big data analysis products. Now it's up to Oracle to show how fast it can become an active participant in the NoSQL and big data markets.
—Doug Henschen ([email protected])
[QUICKTAKES] ORACLE APPLIANCES
Where’s the hard sell?
Following on the successful use of APIs by eBay, Amazon.com, and Salesforce.com, many companies have published their own such integration code over the last decade to attract outside developers to their businesses. What do developers think of these APIs?
"Hideously outdated documentation ... that will make you want to slit your wrists," one developer says of the APIs with which he works.
"Authorization sucks," says another.
"At Facebook, everything is broken," says a third, not hesitating to name the source of his ire.
These comments are from a survey by Your Trove, an online service for developers attempting to connect to social media APIs. Survey results were presented earlier this month at the Business of APIs conference, which aimed to advance the art of constructing APIs for use by external developers.
Developers' biggest complaint is that API documentation is unreliable, is out of date, and offers poor guidance on how to implement a particular API. They also say error messages they get when something goes wrong are too vague to help them figure out the cause.
Coming up with effective APIs for developers can help businesses expand, as developers add functionality and bring customers the company couldn't get on its own. Failing to do so can mean missing a major opportunity. Gartner predicts that 75% of the Fortune 1,000 will offer a public API by 2014. However, the Your Trove survey indicates many companies still have a ways to go before their APIs provide a competitive advantage.
Common Reference Point
LinkedIn added a hiring service through an easy-to-use API, Adam Trachtenberg, director of the professional social networking site's developer network, said at the conference. Most of LinkedIn's 120 million users just post a professional profile on the site, but widespread use of its APIs and online apps on other websites has made LinkedIn a common reference point.
About 30,000 developers have used LinkedIn's APIs to establish connections to the site's services. LinkedIn now derives 48% of its revenue from its Post a Job service, charging $295 for a 30-day job posting.
Data analytics applied through social media APIs "is a huge opportunity," said Ryan Sarver, Twitter's director of platform. SocialFlow, for example, publishes aggregate content from an identifiable group, such as Forbes followers, to see what they're talking about. In doing so, it's able to anticipate stock market movements 15 to 16 minutes ahead of Bloomberg business news feeds, Sarver claimed. That's "huge in terms of market movements," he said.
Netflix transformed its movie-delivery business from being the largest user of the U.S. Postal Service to one of the largest users of Amazon's EC2 cloud for its streaming service. It produced APIs that opened its catalog and ordering services to the iPhone, iPad, Xbox 360, Nintendo Wii, Sony PlayStation, and other consumer devices. Its subscriber list expanded from 12 million to 20 million last year as a result.
Salesforce offers developers access to its CRM platform through public APIs, a capability that helped the company reach the $2 billion mark in annual revenue a few months ago, said Dave Carroll, director of developer evangelism.
Salesforce didn't start out making it easy for developers. In 2003, it charged $10,000 for a developer to access the documentation for its first API, viewing the API as a profit center. There were almost no takers, Carroll said.
The following year, Salesforce made the API free in order to attract more developers. That meant a Salesforce CRM user could set up links between back-office accounting or inventory systems and data in the Salesforce database, making the Salesforce apps more useful to customers, Carroll said. Salesforce then expanded its API set.
In 2006, it launched AppExchange, where developers could sell Salesforce apps. Salesforce now offers APIs to access its Chatter social networking service and RESTful APIs that work with various mobile devices. It also offers an API that customers can use to build apps on Force.com, its platform-as-a-service environment.
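To make that concrete, here's a minimal sketch of the kind of versioned REST call such APIs support. The endpoint path follows Salesforce's documented REST conventions of the era, but the instance URL, token, and record ID below are placeholders, not a working integration:

```python
# Minimal sketch: fetching a CRM record over a versioned REST API.
# The instance URL, token, and record ID are placeholders for illustration.
import json
import urllib.request

INSTANCE = "https://na1.salesforce.com"     # placeholder instance URL
TOKEN = "00D...session_token"               # obtained via OAuth in real use

def get_account(account_id: str) -> dict:
    # Versioned path: keeping v23.0 working after v24.0 ships is the
    # backward compatibility Carroll says API providers must maintain.
    url = f"{INSTANCE}/services/data/v23.0/sobjects/Account/{account_id}"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(get_account("001D000000IqhSL"))   # hypothetical record ID
```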
Giving outside developers access to your company's internal services broadens your business, if done in the right way, Carroll said. "Your API history ... reflects upon your business history and future evolution," he added.
Carroll recommends companies keep APIs simple, have good documentation, and maintain backward compatibility. Otherwise, developers who used an early version will suddenly find their applications not working. —Charles Babcock
([email protected])
Before you plunk down money for additional bandwidth for your branch and remote offices, consider WAN optimization. This technology delivers significant performance benefits and is a win for tight budgets because it can let IT delay the purchase of larger-capacity connections.
WAN optimization appliances improve use of existing bandwidth by transparently reducing the amount of traffic pumped into the WAN. They smooth over TCP shortcomings via better window management, for example. They optimize chatty upper-layer LAN protocols like MAPI, HTTP, and Windows file sharing.
We surveyed 486 IT professionals who use or have evaluated WAN optimization appliances to find out how they perform. Not surprisingly, Cisco Systems gets invited to the WAN optimization party by nearly half (49%) of respondents, largely because of its ubiquity in enterprise data centers. Riverbed is a close second, with 40%, handily beating out the rest of our field. Citrix, Juniper, Blue Coat, and F5 round out the top six.
Blue Coat, Cisco, and Riverbed have the highest rankings based on 10 evaluation criteria, including product performance and reliability (see chart, below).
While Blue Coat and Riverbed lead on performance in our survey, you generally won't find much variation in raw deduplication rates among products. Where you will find differentiation is in how much data the appliances can process on the LAN side. Vendors tend to scale their appliances based on WAN capacity, from T1 speeds to 1 Gbps and beyond. This scaling doesn't reflect network performance as much as the size of the internal storage array containing the deduplication data set, the number and types of CPUs, and RAM. The larger the WAN pipe and the more capacity that's consumed, the larger the disks must be to build an adequate dictionary. And, of course, adequate CPU and RAM are needed to be responsive.
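That dictionary is, at heart, a store of chunk fingerprints shared by the appliances at both ends of the link. A simplified sketch of the idea in Python, not any vendor's actual algorithm (real appliances use variable-size chunking and disk-backed dictionaries, and each end keeps its own synchronized copy):

```python
# Simplified WAN deduplication sketch: replace repeated chunks with references.
import hashlib

CHUNK = 4096        # fixed-size chunks, for simplicity
dictionary = {}     # fingerprint -> chunk; stands in for both ends' shared state

def encode(stream: bytes) -> list:
    out = []
    for i in range(0, len(stream), CHUNK):
        chunk = stream[i:i + CHUNK]
        fp = hashlib.sha1(chunk).digest()
        if fp in dictionary:
            out.append(("ref", fp))        # 20-byte reference instead of 4 KB
        else:
            dictionary[fp] = chunk
            out.append(("raw", chunk))     # first sighting crosses the WAN
    return out

def decode(tokens: list) -> bytes:
    # Refs are resolved from the dictionary; raw tokens carry the data itself.
    return b"".join(dictionary[t[1]] if t[0] == "ref" else t[1] for t in tokens)
```

The more unique traffic the link carries, the bigger this dictionary grows, which is why appliance sizing is really about disk, CPU, and RAM rather than port speed.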
Automatic application detection and detection within HTTP are the WAN optimization features that are particularly important as companies "webify" more applications and adopt VDI. Blue Coat and Citrix have long touted their automated application discovery features that can poke into oft-used protocols, such as HTTP, and differentiate Outlook Web Access from PeopleSoft and act accordingly. Citrix, in its quest to move everyone to VDI, has a leg up on optimizing its own popular VDI protocols.
Not to be left out, other WAN optimization vendors say their products can perform some VDI optimization as well, though they're usually limited to TCP optimizing and packet coalescing. While many optimizers offer manual configuration, where administrators specify the server IP address and other details, having built-in detection means one less task set added to your change-control processes. This is a key distinction to look for when shopping for a WAN optimization product.
Most respondents using WAN optimization are happy with their appliances and aren't actively looking to replace them. Still, vendors shouldn't become complacent. Respondents say advances in technology and a more cost-effective offering could spur them to kick out an incumbent. —Mike Fratto
([email protected])
[Chart: WAN Optimization Appliance Overall Vendor Performance. Weighted, aggregated score across 10 evaluation criteria, maximum possible score of 100%. Vendors shown: Blue Coat, Cisco Systems, Riverbed, Citrix, Juniper, F5; top overall score: 71%. All data: InformationWeek WAN Optimization Appliance Vendor Evaluation Survey of 486 business technology professionals, April 2011]
How Top Vendors Stack Up
Mean average ratings, based on a scale of 1 (poor) to 5 (excellent)

Criterion                    Blue Coat   Cisco Systems   Riverbed
Product performance             4.1           4.0           4.2
Product reliability             4.0           4.1           4.1
Flexibility to meet needs       3.8           3.9           3.9
Pre-sales support quality       3.8           3.8           3.7
Product innovation              3.8           3.8           4.0
Operation cost                  3.7           3.6           3.6
Breadth of product line         3.7           4.0           3.7
Post-sales support quality      3.7           3.8           3.8
Service innovation              3.6           3.6           3.7
Acquisition cost                3.5           3.4           3.3
Download the full 31-page report, free with registration, at informationweek.com/reports/wanopt
Don't look now, but as telecom companies acquire cloud computing vendors, we're seeing the beginnings of cloud networks: chains of linked data centers owned by one company that let two or more data centers back up one another.
Among the acquisitions is Verizon's January purchase of cloud data center pioneer Terremark for $1.4 billion. Three months later, CenturyLink said it would buy managed services host and cloud infrastructure provider Savvis for $2.5 billion. In June, a subsidiary of Japan's telecom giant NTT bought OpSource to establish a cloud infrastructure business unit.
These pairings are bringing much needed geographic distribution to the backup and recovery capabilities offered by cloud services providers. Companies affected by the April outage at Amazon Web Services' northern Virginia center would have welcomed geographically diverse backup and recovery options.
If you were an AWS customer in the Amazon data center that went down, you couldn't easily designate the AWS data center in Dublin as your preferred failover site. You couldn't even select AWS's U.S. West Coast data center, unless you constructed the network links to it yourself. You were stuck using a neighboring Amazon "availability zone," which in April wasn't necessarily in a separate data center and in some cases froze up at the same time as the primary zone.
With new linked data center chains in the cloud, automated backup to a separate geographic location is a definite possibility. The CenturyLink-Savvis combination brought together 34 Savvis and 16 CenturyLink data centers in North America, Europe, and Asia. The Verizon-Terremark union combined 13 Terremark data centers with 36 Verizon hosted services data centers, for 49 in North America, South America, Europe, and Asia.
These regionalized groups of data centers have a hub that acts as a central facility to connect all customers to the Internet and other network junctions via high-speed pipes.
Geographically separated data centers have been linked before, of course, but these new groupings are being interlinked by a single telecommunications company that can automatically implement connections as a service over high-speed lines.
Terremark's Experience
I asked Ben Stewart, senior VP of facilities engineering at Terremark, whether customers could have used a distant Verizon data center as a backup site prior to the merger. It could have been done, he says, but it would have taken a lot of work by telecom-savvy cloud customers. First, they would've had to study the rate tables of the various telecom carriers available on the route. Then they'd have to negotiate with each carrier for network segments that could serve their needs and test to see if it all worked.
And what if something went wrong, like a router failure at a key network junction? The customers would have had to make a half dozen phone calls to find out the source of the trouble.
Now if Terremark customers want to link to another data center, they click on an order form box, and the link and additional account are set up. The number of carriers involved and the number of router hops is drastically reduced, improving speed.
Terremark had initiated this approach on its own with its 13 data centers, but "it would have taken us a long time to achieve global reach," Stewart says. "Now, with Verizon, we're there."
Terremark last month opened a 25,000-square-foot data center in Amsterdam, where it has quick access over a 20-Gbps line to one of the world's largest Internet exchanges. The facility is a hub for Terremark's data centers in Frankfurt, London, Madrid, and Paris, providing a simplified routing path to the rest of the world.
Amsterdam is directly connected to Terremark's network access point of the Americas, a huge, 750,000-square-foot facility in downtown Miami. So much traffic converges on the network loops that circle downtown Miami that it's one of the five best interconnected cities in the world, Terremark says on its website. NAP of the Americas is a gateway to other points in the United States and is one router hop from Bogotá, Colombia, and two hops from São Paulo, Brazil.
"Amsterdam to Miami is one hop at two-thirds the speed of light over fiber cable. So Terremark customers in London, Madrid, Paris, and Frankfurt are all linked now, just two hops apart through Amsterdam," Stewart says.
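Stewart's "two-thirds the speed of light" figure translates directly into a latency estimate. A small sketch; the Amsterdam-to-Miami distance is an assumed great-circle figure of roughly 7,400 km, not a number from Terremark:

```python
# One-way propagation delay over fiber at ~2/3 the speed of light in vacuum.
C_KM_S = 299_792        # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3    # typical propagation speed in optical fiber

def one_way_ms(distance_km: float) -> float:
    return distance_km / (C_KM_S * FIBER_FACTOR) * 1000

# Assumed great-circle distance, Amsterdam to Miami: ~7,400 km
print(f"{one_way_ms(7400):.0f} ms one way")   # roughly 37 ms, before router hops
```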
Customer Perspective
Global companies, such as InterContinental Hotels, would like to decentralize their mainframe applications and make them available to serve customers in many different locations. If InterContinental's hotel room booking and customer relationship systems can be distributed, service to customers will seem instantaneous, says Bryson Koehler, senior VP of revenue and guest information. He wants customers to be able to check into their rooms from smartphones while in the taxi from the airport. That will only happen with distributed services, Koehler says.
Running coordinated distributed systems is difficult to do, but infrastructure as a service in the form of interconnected data centers could make it a check-box option. Having established a workload in the cloud, customers would just check off where else they'd like to have it run and indicate which data center is the primary copy.
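That "check-box option" amounts to a placement policy the provider executes on the customer's behalf. A hypothetical sketch of what such a policy might look like; the site names and this little API are invented for illustration, not any vendor's actual interface:

```python
# Hypothetical placement policy for a daisy-chained cloud: the customer marks
# a primary site and checks off replicas; the provider handles the links.
from dataclasses import dataclass, field

@dataclass
class WorkloadPlacement:
    name: str
    primary: str
    replicas: list = field(default_factory=list)

    def failover_target(self, failed_site: str) -> str:
        # If a site goes down, promote the first surviving site in order.
        candidates = [s for s in [self.primary] + self.replicas if s != failed_site]
        if not candidates:
            raise RuntimeError(f"no surviving site for {self.name}")
        return candidates[0]

booking = WorkloadPlacement("hotel-booking",
                            primary="miami",
                            replicas=["amsterdam", "sao-paulo"])
print(booking.failover_target("miami"))   # -> amsterdam
```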
Daisy-chained data centers in the cloud will one day host carbon copies of key enterprise applications. In some competitive situations, including the hotel business, distributed applications will make a significant difference.
In addition to the 49 data centers that make up Terremark's infrastructure as a service, Verizon also operates 200 telecommunications data centers that handle billing and customer service for its phone service customers. Since a telecom data center isn't much different from a cloud data center, it's possible that in the future Verizon will reserve space in the telecom facilities to give its data center chain more locations for distributed systems.
The CenturyLink-Savvis combo has 50 data centers around the world, interlinked data centers with access to key network junctions and exchanges. Amazon builds its data centers at prime network junctions as well, but at this point, it's still up to the customer to navigate between one center and another.
In the future, no cloud data center will be an island. No tsunami, hurricane, or earthquake will be able to take down your systems, just because your primary data center loses power or has its operations otherwise disrupted.
Both virtualized systems and cloud workloads can migrate quickly to a new location. A running virtual machine can transfer to a new home in milliseconds; given a few seconds warning, it can migrate with no data lost. Virtualized systems running inside a data center that's part of a chain in the cloud may prove to be much more durable and available than existing high-availability designs. —Charles Babcock
([email protected])
[COVER STORY]

New Way To Network
New technology will disrupt the networking world, which for decades has revolved around Ethernet and TCP/IP, and stalwart vendors such as Cisco
By Art Wittmann

The combination of Ethernet and TCP/IP is so powerful and so fundamental to the way we craft data center systems that it's almost heresy to even suggest we move beyond those protocols. In the 1990s, Gartner famously predicted that Token Ring would supplant Ethernet by the end of that decade,
and Novell created its own networking protocol, as did Apple, rather than take on what they saw to be the flaws of the overly complicated TCP/IP. And yet here we are today: Token Ring is a relic, IPX and AppleTalk are footnotes in the storied past of multiprotocol routers, and Ethernet and TCP/IP are the dominant networking technologies.
While no one in their right mind suggests completely replacing Ethernet and TCP/IP, anyone who's struggled to automate data center load management in today's virtualized data centers knows that current networking protocols present a challenge. For companies to make the most efficient use of their virtualized servers, they must move workloads around their data centers, but doing so implies moving network connectivity along with performance assurances, security, and monitoring requirements. Today, that's either impossible to do automatically, or the method for doing it is highly proprietary. And virtualization isn't the only challenge: as businesses add more applications to their networks, they need to address the unique needs of those apps at a policy level.
Quite simply: Networking must change if it's going to keep up with what businesses want to accomplish. Imagine networks that support both lots of live streaming video as well as financial and healthcare transactions at the core. For video, if a network gets congested, the thing to do is drop frames at the source. There's no point in delivering voice or video data late. Meanwhile, the network never should drop packets of financial data. A smarter high-level policy might be to define separate paths through the network for the two different types of data. In regulated industries, network designers may want to set policies that make it impossible for certain types of data to hit various parts of the network, or ensure that security appliances always look at some flows of sensitive data.
Simultaneously, and possibly separately, IT architects will want to create policies to ensure that certain essential services are highly available and protected with a disaster recovery plan.
While it was possible to set up environments that support some of these policies when applications and services were tightly coupled with their servers, virtualization makes such a static configuration hopelessly outdated. Loads change and servers fail, and virtualization lets you deal with all that, but only if the network can respond to a layered set of policies that must be observed in a highly dynamic environment. Network configurations, just like virtual servers, must reconfigure themselves in the blink of an eye, and to do that, bridging and routing protocols must evolve.
So far, they haven't. Network engineers are still versed in the command line interfaces of the switches they run. Policies involve writing router rules and setting access control lists, usually by crafting them in proprietary formats, and then using scripts to apply those rules to devices across the network. Even where better tools exist, network designers can set the quality of service, VLANs, and other parameters, but the Layer 2 switching rules are set by Ethernet's Spanning Tree protocol and the routing rules are dictated by TCP/IP. There's little ability to override those mechanisms based on business rules.
At a conceptual level, the answer has been dubbed "software-defined networking," or SDN: letting network engineers specify configurations in high-level languages, which are then compiled into low-level instructions that tell routers and switches how to handle traffic. The idea is to give engineers more complete access to the lowest-level functions of networking gear so that they, and not TCP/IP or Spanning Tree, dictate how network traffic should move.
At the same time, engineers would work in a higher-level language to more easily describe complex constructs implemented as simple rules on a router or switch. It would be a lot like how a programmer writes in C++ or Visual Basic, and the commands are then compiled into the machine language of the processor.
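The compiler analogy can be made concrete with a toy example. The sketch below is purely illustrative, not OpenFlow syntax or any vendor's policy language: a high-level, per-traffic-class policy of the kind described above (drop video at the source under congestion, never drop financial data) is "compiled" into low-level match/action rules a switch could apply:

```python
# Toy "policy compiler": high-level intent -> low-level match/action rules.
# Illustrative only; real SDN controllers emit flow-table entries instead.
POLICY = {
    "video":     {"match": {"udp_dst": 554},  "on_congestion": "drop_at_source"},
    "financial": {"match": {"tcp_dst": 8443}, "on_congestion": "never_drop"},
}

def compile_policy(policy: dict) -> list:
    rules = []
    for cls, spec in policy.items():
        # Each high-level class becomes a simple rule a switch can evaluate.
        action = ("set_queue:best_effort"
                  if spec["on_congestion"] == "drop_at_source"
                  else "set_queue:guaranteed")
        rules.append({"match": spec["match"], "action": action, "class": cls})
    return rules

for rule in compile_policy(POLICY):
    print(rule)
```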
Departure From TCP/IP
In a software-defined network, a central controller maintains all the rules for the network and disseminates the appropriate instructions for each router or switch. That centralized controller breaks a fundamental precept of TCP/IP, which was designed not to rely on a central device that, if disconnected, could cause entire networks to go down. TCP/IP's design has its roots in a day when hardware failures were much more common, and in fact part of the intent of the U.S. military's Defense Advanced Research Projects Agency in sponsoring the original research behind the Internet was to develop Cold War-era systems that could continue to operate even when whole chunks of the network had been vaporized by a nuclear bomb.
Today's needs are far different. Letting virtualized servers and other network resources pop up anywhere on the network and instantly reroute traffic as they do is far more important than gracefully recovering from router or switch crashes. Large enterprise Wi-Fi networks already make wide use of controller-based architectures, and the SDN concept is well proven there. Breaking into the data center and other core enterprise network functions is another matter.
Two major obstacles stand in the way of generalized acceptance of software-defined networks. The first is the absence of a technical specification that describes how hardware vendors should implement the SDN constructs in their products. That problem's easy to solve, and good progress is being made with the OpenFlow standard, first proposed by Stanford researchers and now on its way to becoming a recognized standard.
The second problem is tougher to solve because it involves convincing the likes of Cisco, Juniper, and Brocade, the three vendors of TCP/IP networking equipment to both the enterprise and to carriers and big-data Internet companies, that it's in their interests to participate in OpenFlow.
What Are The Most Important Business Goals Delivered Through Virtualization?

                                                     2011    2010
Ability to deploy IT services faster                  52%     40%
Disaster recovery                                     51%     52%
Ability to build prototype IT services faster         17%     22%
Reduced data center carbon footprint                  15%      6%
Continuous data protection                            15%     35%
Ability to use fewer IT staffers in data center       13%      8%
Self-provisioning by business units                    4%      6%
Ability to charge business units for IT resources      1%      8%

Data: InformationWeek Virtualization Management Survey of 396 business technology professionals in August 2011 and 203 in August 2010
OpenFlow itself doesn't fully solve the problem of creating a software-defined networking environment, but it adds some important pieces missing from the existing IP network management and control protocols.
First, OpenFlow defines what a controller is and how it can connect securely to network devices that it will control. Second, OpenFlow specifies how a controller will manipulate a switch's or router's forwarding table, which specifies how incoming packets get processed and sent on. Before OpenFlow, there was no standardized way to directly manipulate the forwarding table, so SDNs were either completely proprietary or functionally handicapped.
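What that manipulation looks like can be modeled in a few lines. A conceptual sketch in Python of a forwarding table a controller adds entries to and deletes entries from; this mirrors the add/modify/delete idea, not the OpenFlow wire protocol:

```python
# Conceptual model of a controller programming a switch's forwarding table.
class FlowTable:
    def __init__(self):
        self.entries = []   # (match_dict, action) pairs, checked in order

    def add(self, match: dict, action: str):
        self.entries.append((match, action))

    def delete(self, match: dict):
        self.entries = [(m, a) for m, a in self.entries if m != match]

    def lookup(self, packet: dict) -> str:
        for match, action in self.entries:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"   # table miss: ask the controller

switch = FlowTable()
switch.add({"ip_dst": "10.0.0.5"}, "output:port2")
print(switch.lookup({"ip_dst": "10.0.0.5", "tcp_dst": 80}))   # output:port2
print(switch.lookup({"ip_dst": "10.0.0.9"}))                  # send_to_controller
```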
What Will Cisco Do?
It's not hard to imagine why industry heavyweights would be wary of efforts to remove the brainpower from their devices and put it on centralized controllers. That Cisco and other networking vendors enjoy fat profit margins on routers and switches has everything to do with their providing the network hardware, software, and, more often than not, management tools. A fair analogy is the margins earned by Oracle/Sun, IBM, and Hewlett-Packard on their proprietary Unix systems vs. the margins on their x86 servers. In the x86 world, Intel, Microsoft, and VMware, not hardware makers, earn the fat margins.
OpenFlow has received enthusiastic support from enterprise-centric networking vendors such as Extreme and NEC, with NEC and startup BigSwitch the first out of the gate with controllers. Both Juniper and Cisco are participating in the Open Networking Foundation but have yet to announce products supporting the standard. Brocade is an enthusiastic supporter but views telecom carrier networks and very large Internet businesses as the most likely first customers.
At some point, though, the heavyweights may have no choice but to offer OpenFlow-based enterprise products, too. Broadcom and Marvell, which make the chips for many switches, both support OpenFlow. So whether Cisco likes it or not, enterprise customers will have the option of buying affordable, quality products that support the standard.
OpenFlow doesn't necessarily doom leaders like Cisco. If Broadcom and Marvell become the Intel and AMD for the enterprise switching market, Cisco can recast itself as Microsoft or VMware. No one understands the complex issues of managing traffic like Cisco does, and if it were to position itself as the premier maker of network controllers in an OpenFlow world, its customers would gladly grant it that status. Cisco won't let another company assume that role. If it doesn't embrace OpenFlow, it'll at least try to offer a proprietary equivalent.
However the move to software-defined networks plays out, Cisco in particular has a strong hand to play. Its already tight relationship with VMware and its leadership positions in both storage networking and data networking will make Cisco hard to beat.
The transition to software-defined networks will happen, but predicting the timing is much trickier. There's no backward compatibility for OpenFlow or any other SDN scheme. Fundamental hardware changes are required to make SDNs perform at high speeds. Cisco paints itself as cautiously enthusiastic about OpenFlow. Indeed, if it can see a way to remain the preferred switch vendor with healthy margins, all while supporting OpenFlow, Cisco may see the technology as the magic bullet that forces a networking hardware refresh faster than the current five- to eight-year cycle. Meanwhile, other relationships are moving ahead. NEC, for example, is developing an OpenFlow virtual switch for use with Windows Server 8 and Microsoft's Hyper-V server virtualization software.
The final and most unpredictable variable in this equation is the network management teams themselves. After being well served for 30 years by Ethernet and TCP/IP's fundamental protocols, they'll move very cautiously to software-defined networks and centralized controller schemes.
In some environments, where massive data centers and highly parallel throughput server farms are the norm, the transformation can't happen fast enough. Google, Microsoft, Facebook, and Yahoo are all members of the Open Networking Foundation driving OpenFlow. For those with more pedestrian setups, getting comfortable with an SDN will take some time.
Write to Art Wittmann at [email protected].
[Chart: Virtualization And IT Strategy. "How important are these virtualization technologies to your company's overall IT strategy?" Mean average ratings on a scale of 1 (not important) to 5 (very important) for server virtualization, storage virtualization, desktop virtualization, I/O virtualization, and network virtualization (e.g., OpenFlow, Cisco Nexus, NextIO, HP). Data: InformationWeek 2011 Virtualization Management Survey of 396 business technology professionals, August 2011]
Inside OpenFlow
By Jeff Doyle

Server virtualization is now a proven practice, creating a cost-effective means of allocating compute resources to changing user and application requirements. But when packets leave the server, they still pass through a traditional switching architecture, which doesn't have that same level of flexibility.
The goal of software-defined networking (SDN) is to bring the benefits of virtualization (shared resources, user customization, and fast adaptation) to the switched network. SDN puts the "intelligence" of the network into a controller or hierarchy of controllers in which switching paths are centrally calculated based on IT-defined parameters, and then downloaded to the distributed switching architecture.
The cost savings are apparent. The expensive part of a high-end switch is the sophisticated control plane and custom silicon. By moving the control plane to a centralized controller, the architecture can use inexpensive commodity switches built with standard silicon.
[Diagram: Central Network Command. A controller sits above the switches. Software-defined networking centralizes the control plane, using a controller through which IT can write and enforce rules for how different types of data are routed. The network keeps a distributed forwarding plane but can use commodity OpenFlow-enabled switches and routers, since the network's intelligence is in the controller.]
STILL EMERGING
OpenFlow's Impact: Industry Debate Rages

Is OpenFlow just the latest attempt to centralize network control, or is it something wholly new? People across the industry have been debating whether OpenFlow is really an industry game-changer.
"The 'new thing' in my mind is the timing," says Kyle Forster, co-founder of Big Switch Networks, a startup developing a network controller based on OpenFlow. "The first leg of this is silicon trends—merchant silicon is changing the rules around who makes what. The second leg is supply chain trends. Server vendors must react to the Cisco UCS by expanding into networking. Last, customer requirements are changing. Networks for VMs and for the increasingly disparate types of mobile and embedded devices on a campus LAN are different from networks of five and 10 years ago."
OpenFlow itself is just a definition of instruction and messaging sets between a server-based controller and a group of switches. It's how the controller uses the instruction set that will determine how disruptive the protocol becomes to the networking industry.
Ivan Pepelnjak, chief technology adviser at NIL Data Communications, has written a number of blog posts (at blog.ioshints.info) questioning whether the expectations being set around OpenFlow are overblown. "I can't see a single area where a TCAM download protocol—which is what OpenFlow is—can do things that a router or switch could not do. There are things that can be done with OpenFlow that cannot be done with current protocols. But then one has to ask oneself: Why has nobody yet developed a protocol to address those things?"
Pepelnjak does see the protocol as a means of consolidating and streamlining the way networks are controlled. He says OpenFlow will make it easier to implement customized functionality, and it will give third-party software lower-level control in multivendor networks.
Forster counters that OpenFlow will make a lot of difficult networking problems easier to solve. "Automating move-add-change requests, virtual network overlays, multipath forwarding, and automated provisioning are all relatively straightforward engineering initiatives with an OpenFlow-style centralized control plane," he says. "While they can be done with a distributed control plane or, practically speaking, with a classic set of networking features and a lot of Expect scripts, these classic approaches tend to be a fragile set of scripts that open up a Pandora's box of corner cases."
Martin Casado, one of OpenFlow's developers and currently CTO of Nicira Networks, says he's seen some "pretty cool stuff" built on OpenFlow, like network connectivity managed by a high-level language that plugs into an existing, proprietary security framework. The key points are that OpenFlow gives network managers programmatic control of their networks using industry standards, he says, "using the same distributed system libraries and packages they use to orchestrate the rest of their infrastructure."
Ultimately, the success or failure of OpenFlow, and, more widely, of software-defined networking, depends on how well controllers integrate with switches and how widely available OpenFlow-capable switches become. "I think we're a long way out," Casado says, "before a controller vendor can support an OpenFlow-capable switch without a fairly close partnership between the two companies."
Dave Ward, CTO of Juniper Networks' Infrastructure Products Group, says the biggest benefit of OpenFlow is giving IT more visibility and control in virtualized data centers. "As endpoints move around in the ever-expanding network footprints, we think that flexible methodologies are needed to enable and disable certain traffic conditions or to enable and disable certain types of traffic," Ward says. "Having such functionality via a common interface could prove very valuable for anyone operating an infrastructure that is experiencing high rates of change."
So far, Cisco and Juniper have supported OpenFlow, and Juniper has demonstrated support in some of its products. "The real question about OpenFlow," Ward says, "is not if it provides additional capabilities in any one device, but whether it can deliver those capabilities across a heterogeneous network."
—Jeff Doyle
modity switches built with standard silicon. Reducing the cost of each switch by 70% or more, spread across a data center or campus, quickly adds up to real money.
SDN is also about improving performance. A centralized control plane lets companies customize their networks without depending on or coordinating different vendor operating systems. Networking pros can quickly change what data types get forwarding priority, as business requirements change.
The first step toward SDN is to define a messaging protocol between the controller and the individual switches making up the forwarding plane. This is where the emerging OpenFlow networking standard comes in. OpenFlow enables open, virtualized, and programmable networks. Each of these elements is key.
Open: The standardized instruction set means any OpenFlow controller can send a common set of instructions to any OpenFlow-enabled switch, regardless of the vendor.

Virtualized: IT can specify different forwarding rules for different data types, creating multiple logical forwarding paths over the same physical network, depending on the needs of a particular app.

Programmable: The still-evolving OpenFlow instruction set lets IT create rule sets that work in combination with a switch vendor's configuration options, or independent of them. With its roots in academic research networks, OpenFlow lets users try new ideas or create new protocols independent of any vendor. Most important, IT can program a network for specific application requirements.
Roots Of Network Virtualization

The separation of the control plane from the forwarding plane isn't new with OpenFlow. It has been used in the design of high-end routers since the mid-1990s and telephony switches long before that. The initial motivation was to protect each switch element from a degradation in another: A very busy route processor doesn't hurt forwarding performance, and peaks in network load don't pull processing resources away from the control plane. Significantly, that separation—first within a single chassis, and more recently with a single processor controlling multiple switching fabrics—provided an environment for the development of high-availability features.
Multiprotocol Label Switching (MPLS), another key technology in modern networks, also has features that relate to this trend, since it builds an "intelligent" access layer around a relatively dumb but high-performance network core. That structure enables the creation of flexible, innovative services over a homogeneous backbone. MPLS also reflects a trend in networks similar to what we're seeing in server virtualization: A single physical network uses a logical component—MPLS—to allow the overlay of multiple virtualized network services.
What’s new and different with OpenFlow is that, in theory, it could work with any type of commodity switch.
Operating in the gap between a centralized control plane and a distributed forwarding plane, OpenFlow defines a protocol that lets a controller use a common set of instructions to add, modify, or delete entries in a switch's forwarding table. An instruction set might not sound like a technology breakthrough, but it's all about what people do with that instruction set, says Kyle Forster, co-founder of Big Switch Networks, a startup building networking products based on the OpenFlow standard. If you read the x86 server instruction set, it "isn't obvious that you could build Linux, Microsoft Word, or the Apache Web Server on top," Forster says. "I think OpenFlow is the same way. It isn't about the basics. It's all about the layers and layers of software built on top. That is where the benefits are going to be felt."
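To make the instruction-set idea concrete, here is a minimal sketch, in Python, of what one such instruction looks like on the wire: an OpenFlow 1.0 flow_mod message telling a switch to forward traffic arriving on one port out another. The field layout follows the 1.0.0 specification; the port numbers, priority, and transaction ID are illustrative values, not anything the protocol prescribes.

```python
import struct

# Constants from the OpenFlow 1.0.0 specification.
OFPT_FLOW_MOD = 14              # message type: modify the flow table
OFPFC_ADD     = 0               # command: add a new flow entry
OFPP_NONE     = 0xFFFF          # "no port" value for out_port
OFPAT_OUTPUT  = 0               # action type: forward to a port
OFPFW_ALL     = (1 << 22) - 1   # wildcard every match field...
OFPFW_IN_PORT = 1 << 0          # ...except, here, the ingress port

def flow_mod_add(in_port: int, out_port: int, xid: int = 1) -> bytes:
    """Build a flow_mod saying: forward in_port traffic to out_port."""
    # ofp_match: wildcard everything except in_port (40 bytes).
    match = struct.pack(
        "!IH6s6sHBxHBB2xIIHH",
        OFPFW_ALL & ~OFPFW_IN_PORT,   # wildcards
        in_port,                      # in_port
        b"\x00" * 6, b"\x00" * 6,     # dl_src, dl_dst (wildcarded)
        0, 0,                         # dl_vlan, dl_vlan_pcp
        0, 0, 0,                      # dl_type, nw_tos, nw_proto
        0, 0, 0, 0)                   # nw_src, nw_dst, tp_src, tp_dst
    # flow_mod body (24 bytes): cookie, command, timeouts, priority,
    # buffer_id (none), out_port (unused for adds), flags.
    body = struct.pack("!QHHHHIHH",
                       0, OFPFC_ADD, 0, 0,
                       0x8000, 0xFFFFFFFF, OFPP_NONE, 0)
    # One output action (8 bytes): type, length, port, max_len.
    action = struct.pack("!HHHH", OFPAT_OUTPUT, 8, out_port, 0)
    payload = match + body + action
    # ofp_header: version 0x01, type, total length, transaction ID.
    header = struct.pack("!BBHI", 0x01, OFPT_FLOW_MOD,
                         8 + len(payload), xid)
    return header + payload

msg = flow_mod_add(in_port=1, out_port=2)
assert len(msg) == 80   # 8 header + 40 match + 24 body + 8 action
```

Everything a controller does ultimately reduces to streams of messages like this one; the interesting behavior lives in the software that decides which entries to install.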
Origins Of OpenFlow

OpenFlow began at Stanford University, in work led by professor Nick McKeown with Martin Casado and others. Their goal was to create an environment in which researchers could experiment with new network protocols while using the campus's production network. It let researchers try out those protocols in a realistic environment without the expense of building a test network or the risk of blowing up the campus network and disrupting production traffic.
The first consideration for using OpenFlow outside of academia was to scale bandwidth in massive data centers. Forster calls this the "million-MAC-address Hadoop/MapReduce" problem. For uses such as Google's search engine, parallel processing of massive data sets takes place across clusters of tens or hundreds of thousands of servers. For such big data applications, "it doesn't take much back-of-the-envelope calculating to come to the conclusion that a tree-based architecture will require throughput on core switches/routers that simply can't be bought at any price right now," Forster says.
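That back-of-the-envelope calculation is easy to reproduce. A sketch with invented cluster numbers (assumptions for illustration, not figures from Forster):

```python
# Rough bisection-bandwidth estimate for a flat, tree-topology cluster.
# The cluster size and link speed are assumed, illustrative numbers.
servers = 100_000      # hosts in the cluster
nic_gbps = 10          # per-server link speed

# If any half of the machines may talk to the other half at full rate,
# the core of a tree must carry half the aggregate bandwidth.
aggregate_gbps = servers * nic_gbps
bisection_tbps = aggregate_gbps / 2 / 1000

print(f"Core must switch roughly {bisection_tbps:,.0f} Tbps")
# -> roughly 500 Tbps, far beyond any single core router of the era.
```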
Interest in OpenFlow has since expanded to cloud computing and virtualized services companies. Earlier this year, six companies—Deutsche Telekom, Facebook, Google, Microsoft, Verizon, and Yahoo—formed the Open Networking Foundation (ONF) to support development of SDN and, by extension, the OpenFlow protocol. ONF now has more than 40 members, mostly vendors, which pay $30,000 a year to fund development of the standard.
Get This And All Our Reports—Free

Sign up for a free membership with InformationWeek Reports and get our full report on OpenFlow and software-defined networks at informationweek.com/reports/openflow. The report includes additional analysis, an in-depth tutorial on the OpenFlow protocol, and seven diagrams illustrating software-defined networks.

The version of the OpenFlow specification in most widespread use is version 1.0.0, which in December 2009 defined three components: controller, secure channel, and flow table.
The newest version of the spec, 1.1.0, released in February 2011, adds two more components: group table and pipeline (see "The OpenFlow Model" diagram below).
The controller is of course the control plane, and it provides IT programmability to the forwarding plane. The controller manages the forwarding plane by adding, changing, or deleting entries in the switches' flow tables.
The controller manages the switches across a secure channel. "Secure channel" is in fact slightly misnamed, since it doesn't provide any security on its own. While the channel is normally authenticated and encrypted using Transport Layer Security, it can be simple unsecured TCP. Implementing an SDN architecture using unsecured communication channels is utter folly, of course, but it can be done. Also, messages across the secure channel are reliable and ordered, but they aren't acknowledged and therefore aren't guaranteed.
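The TLS-or-plain-TCP choice is visible even in a toy controller listener. A sketch, assuming the conventional OpenFlow controller port of 6633; the certificate file names are placeholders:

```python
import socket
import ssl

def open_controller_socket(use_tls: bool = True) -> socket.socket:
    """Listen for OpenFlow switch connections on TCP 6633.

    With use_tls=False this is the 'simple unsecured TCP' the spec
    permits: functional, but any host on the path can read or inject
    flow-table instructions.
    """
    listener = socket.create_server(("0.0.0.0", 6633))
    if not use_tls:
        return listener  # utter folly in production, as noted above
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Placeholder paths; a real deployment authenticates both ends.
    ctx.load_cert_chain("controller-cert.pem", "controller-key.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED       # require a switch cert
    ctx.load_verify_locations("switch-ca.pem")
    return ctx.wrap_socket(listener, server_side=True)
```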
Who Needs It

While the most intense interest in OpenFlow is in large-scale data center applications, there's already speculation about how OpenFlow-based SDNs can benefit other industries. The fact that Deutsche Telekom and Verizon are among the founders of the ONF provides some hints about where the technology may be applied next.
In mobile networks, OpenFlow can help solve a notoriously hard problem in IP: monitoring, metering, and servicing user traffic. In service provider networks, OpenFlow may provide a more workable alternative to MPLS traffic engineering.
More broadly, the ability of a network operator to create custom functions applicable to its own network, and then apply those functions to switches from multiple vendors, is the true promise of SDNs. Technology users have always found surprising ways to use the tools available to them, innovating from the bottom up rather than limiting themselves to top-down vendor systems. An open instruction set could accelerate that innovation.
OpenFlow version 2.0 is in the works, bringing with it a generalized networking instruction set, as well as a standardized API to the network operating system that's planned for 2012 release. That upgrade would make it easier for third-party network management systems and data center provisioning tools to interact with OpenFlow controllers. Manufacturers of commodity switches, including Hewlett-Packard and Extreme Networks, are starting to line up behind OpenFlow and SDNs, as are new vendors such as Nicira Networks and Big Switch Networks that are focused on network control systems.
OpenFlow has the right combination of industry and academic backing to ensure that its development and evolution continue. It has the potential to turn the networking world on its head by disrupting the market positions of high-end router and switch vendors, including Cisco and Juniper. Or those vendors have the potential to turn this trend to their advantage (see related story, p. 33).
While still in its infancy, OpenFlow has already been used to demonstrate fixes to old problems. Conventional Ethernet switching architectures have long been stuck in inefficient, inflexible tree structures. Those don't look sustainable for a lot of what companies will want to accomplish. OpenFlow can be expected over the next few years to change the landscape of large data center and enterprise campus networks.
Jeff Doyle specializes in IP routing protocols, MPLS, and IPv6 and has designed large-scale IP service provider networks. You can write to us at [email protected].
[Diagram: The OpenFlow Model. Shows the OpenFlow 1.1.0 components: a controller that programs the switches, a secure channel for controller-switch communication, flow tables (Flow Table 0, Flow Table 1) and a group table that hold the forwarding plane's directions, and the OpenFlow protocol itself.]
[VIRTUALIZATION SECURITY]
Virtualization Security Checklist
Four steps to a more secure virtual infrastructure
By Michael A. Davis

What's the most dangerous threat to your virtualized systems? Hint: it's not the latest zero-day exploit. The most pressing risk is IT staff who have full privileges in these systems. Take the February 2011 attack by an
IT employee who'd been laid off from a pharmaceutical company. The ex-employee logged in remotely and deleted virtual hosts that ran the company's critical applications, including email, financial software, and order tracking. The company sustained about $800,000 in losses from a few keystrokes, the FBI says.

We're not saying your administrators
will go rogue, but our September 2010 survey on virtualization security found that access to virtualization systems is fairly widespread: 42% of respondents say administrators have access to guest virtual machines. It only makes sense to take precautions, such as security monitoring, so that one person,
whether maliciously or inadvertently, doesn’t bring down critical apps and services. Virtualized systems make it harder
to manage risk, but sensible security practices still apply. Here are four steps to help you protect virtual assets and respond to threats and incidents.
1. Secure Layers

Virtual environments are made up of layers, so you'll want to implement security controls at each layer within the virtual architecture, including controls that you already have in your environment. For example, at the virtual switch layer, redirect traffic out to a firewall or an intrusion prevention system to monitor traffic. Alternatively, use a virtual firewall within the VM cluster.

The primary virtual layers to address include the hypervisor and guest operating systems, the virtual network that connects VMs, the physical network, the virtualization management system, and physical storage of VM images.
2. Define And Document

You can't place security controls around elements you don't know are there.
Thus, it's vital to have accurate, up-to-date information on your virtual environment. That means being able to identify the components in your virtual infrastructure. Make sure you document the primary functions of these components and their owners and administrators.

It's also critical to understand how
data traffic flows through your infrastructure, because the type of data will determine which controls are needed. For example, most companies take extra steps to secure virtual database servers that store critical business data. However, your backups also have copies of this confidential data. Track data flows from start to finish to identify critical areas where additional security measures are needed.
3. Restrict And Separate

Access control and authorization are core security functions, particularly for virtual environments, where control over a single hypervisor can also mean control over the multiple virtual machines that run on top of that hypervisor. As in the physical world, administrator access to systems and their authorization to perform specific functions should be as specific as possible. Every administrator in your shop doesn't need to be able to spin up, modify, and shut down every virtual server in your data center. Logging is another critical security function. It lets you monitor and track all the activities that take place within the virtual environment.

The management consoles from the
major hypervisor vendors provide decent role-based access controls that restrict administrators' permissions to perform basic tasks, and you should take advantage of these capabilities. However, these management consoles don't validate change requests, log all changes, or implement capabilities such as two-factor authentication. For that, you'll need third-party software from vendors such as HyTrust and Catbird, which provide additional
controls, such as change management. With these controls, major changes can't be made to critical systems without authorization from another administrator in addition to the one requesting the change. These third-party tools can also split functions among different IT groups. For instance, the IT security team can be put in charge of managing the logs from the virtualization management console instead of the server administrators. This separation of duties means no single administrator can modify or disable systems undetected.
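The policy these tools enforce is simple to state, even if the products themselves are far richer. A minimal sketch of such a two-person rule; the operation names and approval threshold are invented for illustration, not drawn from HyTrust's or Catbird's actual products:

```python
from dataclasses import dataclass, field

# Illustrative list of operations considered too risky to self-approve.
RISKY_OPS = {"delete_vm", "modify_hypervisor", "disable_logging"}

@dataclass
class ChangeRequest:
    requester: str
    operation: str
    target: str
    approvals: set = field(default_factory=set)

    def approve(self, admin: str) -> None:
        if admin == self.requester:
            raise PermissionError("requester cannot approve own change")
        self.approvals.add(admin)

    def may_execute(self) -> bool:
        # Routine operations run immediately; risky ones need a
        # second administrator to sign off.
        if self.operation not in RISKY_OPS:
            return True
        return len(self.approvals) >= 1

req = ChangeRequest("alice", "delete_vm", "vm-mail-01")
assert not req.may_execute()   # blocked until someone else approves
req.approve("bob")
assert req.may_execute()
```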
4. Secure The Virtual Network

The virtual network has the same problems as the physical one, including the potential for man-in-the-middle attacks, in which compromised VMs intercept traffic between other VMs. To prevent these, it's important to take advantage of the security features in your virtual switches. Most virtualization vendors let you set up VLANs that can segment network devices and traffic based on security and management policies.
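Conceptually, the forwarding decision a VLAN-capable virtual switch makes reduces to a couple of checks. This toy model, with invented VM names and VLAN IDs, also includes the port-isolation idea behind the private VLANs discussed next:

```python
# Toy model of VLAN-based segmentation in a virtual switch.
# VM names, VLAN IDs, and the isolation set are invented examples.
vm_vlan = {"web-01": 10, "web-02": 10, "db-01": 20}
isolated_ports = {"web-02"}   # private-VLAN-style port isolation

def can_communicate(src: str, dst: str) -> bool:
    """Frames are forwarded only within a VLAN; isolated ports are
    additionally cut off from their VLAN peers."""
    if vm_vlan[src] != vm_vlan[dst]:
        return False              # separate broadcast domains
    if src in isolated_ports or dst in isolated_ports:
        return False              # the 'VLAN within a VLAN' effect
    return True

assert not can_communicate("web-01", "db-01")    # blocked by VLAN
assert not can_communicate("web-01", "web-02")   # blocked by isolation
```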
However, the virtual switch you have in place may lack the advanced security and monitoring features of physical switches. For instance, a physical switch lets you create private VLANs, which allow for additional segmentation (think of a VLAN within a VLAN), but many virtual switches don't support this feature. Virtual switches may also lack the ability to provide NetFlow data, which can be used for performance monitoring and attack detection. Do your homework to see if you need a third-party virtual switch that provides these advanced capabilities.

If you can implement only one of these steps at the outset, focus on access control and separation of duties. Most companies have procedures and tools in place to control access to physical systems, and these can be applied directly to virtual environments. Virtualization's risks and challenges can be countered with common security practices.
Michael A. Davis is CEO of Savid Technologies. You can write to us at [email protected].
[Chart: Virtualization threat ratings, mean averages ranging from 2.8 to 3.4, covering seven threats: interception of communication from virtual desktops and their clients (man in the middle); creation of rogue or undocumented guest VMs; penetration of a guest VM that contains confidential data; penetration of the hypervisor management system; access by an attacker to the hypervisor, which provides access to guest VMs; infection or modification of VM images that affect future VMs; and VM-to-VM infection or attack. Data: InformationWeek 2010 Virtualization Security Survey of 684 business technology professionals, September 2010]
DLL Hell Redux
One way around the latest dynamic link library problem
By Dino Esposito

Remember the notorious DLL Hell problem? About a decade ago, before the advent of the .NET platform,
most applications were based on dynamic link libraries, and multiple apps often shared DLLs. At that time, DLLs were identified primarily by name, and version info often wasn't recorded properly. If a new application inadvertently replaced a shared DLL component, other apps that used that component would break. The solution to this problem came with the .NET platform and the Global Assembly Cache, which gives registered components strong names that contain details such as version and author information.

With this problem solved, developers now are grappling with a new one: the third-party DLL Hell. In the beginning of the .NET era, applications used only a few commercial third-party libraries, mostly for user interface needs. The number of these components per application was low, and managers knew which ones they were because they had to pay for them. Today, many more high-quality, third-party components are available.
In addition to top commercial suites of user interface controls, you'll find quite a few well-done, free open source libraries. Increasingly, .NET software projects are based on a variety of third-party components. On average, a project can use six or seven of them. For example, an ASP.NET MVC project may have an IoC framework, a mocking library, ELMAH or something similar for error handling, perhaps the Microsoft Enterprise Library, DotNetOpenAuth for authentication against social networks, NHibernate or some Entity Framework extensions, jQuery and friends, and maybe some other facilities for mobile and HTML5 views and client-side validation. The list goes on and on.

There are two problems with third-party components. Companies must ensure that developers can only use an approved set of libraries. And developers face the problem of repeatedly going through the same (often cumbersome) configuration procedure for a given library—a frustrating and time-consuming operation.

Referencing some of these third-party components isn't as easy as referencing an assembly. More often than not, you must first download binaries to the development machine and then browse your hard disk in order to reference them in the new project. Sometimes you must repeat the operation for multiple assemblies. And sometimes you also have to enter changes in the configuration file of the .NET application and add ad hoc source files to the project. With few exceptions, referencing a third-party component is painful.
Free Package Manager

NuGet is a free package manager application aimed at making it easier for developers to reference third-party libraries from within a Visual Studio project. NuGet provides a command line and GUI front end so packages can be quickly installed and uninstalled. It acts as a host for uploaded packages and lets developers discover them by name and tag-based searches.

Integrated into Visual Studio 2010, NuGet lets developers reference a commonly used third-party library with one click. You open the package manager, scroll the list of available packages, pick the one you're looking for, accept the license agreement (set by the package author, not Microsoft), and download. Each NuGet package contains files (sources, data, help, and binaries) to be added to the Visual Studio project and instructions for the manager. Once the download is complete, the
package manager Xcopies the files and sets up the project as appropriate. This means, for example, that assemblies in the package are added to the list of existing project references, and changes required to the configuration file are merged with the current configuration file. In a couple of clicks, you're set. NuGet definitely saves time.
Vibrant .NET Community

According to Phil Haack, one of the NuGet architects, the utility's primary goal is to foster a vibrant .NET open source community by simplifying the way in which .NET developers share and use open source libraries. In this regard, NuGet has been a great success. Launched less than a year ago, it's one of the most compelling open source projects. A large and growing number of .NET developers use it, and just the fact that adding a dependency is so easy represents a good reason to add more third-party components and libraries. (I think this is similar to what happened in the '90s with Visual Basic and VBX components.)

Companies often put strict boundaries around which frameworks and libraries their developers can use, and they often designate a select group of people to approve the third-party components that can be used. Open access to NuGet's repository doesn't necessarily break these policies.

NuGet package managers read the feed of available libraries from a Microsoft website where developers can upload custom packages for others to use. And, unlike the Apple App Store and Windows Phone 7 marketplaces, there's no approval workflow for submitted packages.

NuGet can be configured to point to a shared folder on the intranet, letting a company have its own local NuGet installation. Developers enjoy the benefits of NuGet, while managers ensure that developers access only an approved set of libraries. In addition, you no longer need to produce internal documentation on how to locate and install approved libraries.
Read more about Windows development at drdobbs.com/windows

Dino Esposito is the author of a number of books about Microsoft programming technology. Write to us at [email protected].
[DR. DOBB’S M-DEV] WINDOWS 8
Windows 8 And Developer Madness
Get ready for another costly migration

Every 10 years or so, Microsoft informs its programmer community that it's radically changing platforms. In the early 1990s, it moved developers from DOS-based APIs to Win32 by forcing them through a painful series of API subsets: Win16 to Win32s and Win32g to Win32. In the early 2000s came the push to migrate to .NET. Now comes a new migration: to Windows 8's constellation of new technologies announced at the Build conference in September.

In both the Win32 and .NET migrations, Microsoft supported backward compatibility. But by changing the UI, limiting updates, and providing diminished support, Microsoft obliged companies to either rewrite their existing code or, at least, write future code for the preferred platforms.

The costs of these past migrations have been enormous and continue to
accumulate, especially for sites that, for one reason or another, can't migrate applications to the new platforms. (Those that can, of course, still bear the price of migration, but presumably this is a one-time cost.)

At least the migration from DOS to Win32 had compelling motivators: a GUI and a 32-bit operating system. The migration from Win32 to .NET had a less obvious benefit: so-called "managed code," which in theory eliminated a whole class of bugs and provided cross-language portability. It's not clear that the first benefit warranted rewriting applications, nor that the second one created lasting value.

The just-announced Windows 8 technologies are for writing "Metro" apps. (All previous software is now grouped into the legacy category of desktop apps.) Metro apps have a wholly new UI derived from Microsoft's mobile offerings and intended to look like kiosk software with brightly colored boxy buttons and no complex, messy features like dialog boxes. Metro UIs can be written in HTML5 and CSS, or by using an XML-like interface definition called XAML. UIs written in HTML5 and CSS run on a JavaScript engine; those using XAML talk directly to a new OS library, called Windows Runtime, principally using C/C++.

Desktop apps can still be written in .NET and use Silverlight, for example, but Microsoft hasn't put forth a long-term road map for these suddenly legacy technologies. It has strongly hinted, though, that the future will look more like Metro and that developers must get on board. Given the lack of stated benefits, however, it's hard to see how or why they should go with the new paradigm, or even why they should care.
—Andrew Binstock ([email protected])
[NETWORKING]
Virtualization Vs. The Network
Server VMs create new switching and security problems
By Kurt Marko

Server virtualization has been a boon for IT, but it creates challenges for the network. Virtualization reduces the number of
physical servers, but it snowballs the number of virtual and network devices. That causes networking issues because, from a switching perspective, there's little difference between a virtual network port and a physical one.
This paradox of server simplicity vs. network complexity is analogous to what would happen if thousands of commuters gave up their individual cars for shared minivans, with each passenger going to a different destination. While this would reduce the number of vehicles on the freeway, it doesn't reduce the number of trips—the driver still must crisscross town dropping off passengers at their offices.
Ride sharing also complicates the routing calculus. Instead of each commuter finding the quickest path between home and office, the van driver must optimize the pickup and delivery schedule to minimize drive time and distance.
Likewise, many of the switching problems that come up with virtualization have to do with performance and management complexity.
For example, aside from merely increasing the number of network devices, virtualization adds tiers to the switching fabric, increasing latency, power consumption, and complexity. The consolidation of virtual machines on physical servers also affects switching scalability and performance. A hypervisor virtual switch with a workload of 10 to 15 VMs per system extracts a modest overhead of about 10% to 15%, but that figure will undoubtedly increase when handling scores of VMs.
Other problems include management and security complications. As more traffic is switched within the hypervisor, traditional network monitoring and security tools lose visibility into a significant amount of network activity.
Problem Solvers

Two new IEEE standards projects aim to help with these and other problems. Both are amendments to the base IEEE 802.1Q VLAN tagging standard.
The more mature project is 802.1Qbg Edge Virtual Bridging. It's designed to let multiple VMs share a common port while obtaining services from an external bridge (that is, an edge switch acting as a reflective relay). Normally, Ethernet frames aren't forwarded back out of the same interface they came in on. This action, called hairpinning, causes a loop in the network at the port. EVB provides a standard way to solve hairpinning. It's a simple protocol extension that can be implemented on existing hardware with a software upgrade to the switch and hypervisor.
Meanwhile, the 802.1Qbh Bridge Port Extension project tackles policy management. The Qbh port extension standard adds a tag, much like standard VLAN tags, allowing network flows to be mapped to specific VMs and followed as those VMs move about the network.
New technology and standards are emerging to address many of the issues raised by virtualization's impact on the network, but companies must ensure that virtualization's benefits in one sector don't turn into problems in another.
While the journey toward a highly virtualized infrastructure will be long, and at times arduous, the result will bring the enterprise to new levels of performance, reliability, agility, and efficiency.
Kurt Marko is a 15-year IT veteran. Write to us at [email protected].
Get This And All Our Reports
Our full report on networking and virtualization is free with registration. Download it at informationweek.com/reports/netvirt
This report includes 16 pages of action-oriented analysis and 6 illustrative charts.
What you’ll find:
> Insight into solutions to these problems
> A detailed discussion of forthcoming standards
Amid continued economic upheaval, is the U.S. losing its lead when it comes to IT innovation? Not according to the
Economist Intelligence Unit, whose 2011 IT Industry Competitiveness Index shows the U.S. extending its lead over the 65 other countries in its biennial ranking. The EIU study explores the proficiency of
countries in six categories: overall business environment, IT infrastructure, human capital, R&D environment, legal envi