Home
Editor’s Letter
Cloud Migration Confidential
Survey Says: It’s All About the Cloud
The Kids Are Plugged In
Getting to DevOps
Overheard
First Look: Is Docker Right for You?
The Case for Leaf-Spine
Matchett: IT: Tear Down This Wall!
Madden: Nonpersistent Desktops Revisited
Plankers: Failure Is Sometimes the Best Option
Citrix Synergy and Modern Infrastructure Decisions Summit
The Inside Story On Cloud Migration
Give apps the VIP treatment when moving to the cloud.
EDITOR’S LETTER
Put the Fun in Dysfunctional
DATA
Survey Says
IT INFRASTRUCTURE
Getting to DevOps
VIEWPOINT
The Kids Are Plugged In
MI Modern Infrastructure: Creating tomorrow’s data centers
SEPTEMBER 2014, VOL. 3, NO. 8
FIRST LOOK
Is Docker Right for You?
OVERHEARD
BriForum 2014
NETWORKING
The Case for Leaf-Spine
END-USER ADVOCATE
Nonpersistent Desktops Revisited
THE NEXT BIG THING
IT: Tear Down This Wall!
IN THE MIX
Failure Is Sometimes the Best Option
MODERN INFRASTRUCTURE • SEPTEMBER 2014 2
MAYBE IT’S JUST me, but this month’s issue left me feeling that something is rotten in the state of IT. Several articles paint IT as a profoundly dysfunctional place to work, where employees are ruled by fear, avoidance and inertia.
Bob Plankers’ column “Failure Is Sometimes the Best Option” describes his experience with several grim IT workplaces. “Some organizations handle failure extremely poorly, with managers roaming the halls screaming at people and firing them in the middle of the outage,” he writes. Then there are those “whose employees are so scared of being blamed for anything that they won’t even apply desperately needed security patches to their systems,” and employees that “just freeze up when they encounter failure.”
Specialization and siloization, to coin a term, have also wreaked havoc, writes Taneja Group analyst Mike Matchett in “IT: Tear Down This Wall!” He writes, “When IT is organized in silos, anytime there is a problem—
troubleshooting application performance, competing for rack space, or allocating a limited budget—the resulting infighting, finger-pointing and political posturing just serves to waste valuable time and money.” Further, Balkanizing IT into separate fiefdoms is bad for the business as a whole: “For someone outside IT, having to navigate a byzantine IT organization just to try out new things can completely stifle business creativity and innovation,” Matchett writes.
There are models out there that point to a better way, namely Agile and DevOps. As I find in my article “Getting to DevOps,” single-source-of-truth monitoring tools can reduce finger-pointing and blame in IT, said Abner Germanow, senior director for enterprise marketing at New Relic, a software analytics company. Having a dashboard “helps move from a culture where ‘George is a jerk because he deployed that code,’ to ‘that code that George deployed isn’t working so let’s fix it.’”
If you don’t believe me, trust Alain Gaeremynck, enterprise architect at Yellow Pages Group, Canada, who witnessed the transition to DevOps a couple of years ago. “At first it was painful for people, because … people are resistant to change,” he said. Today, though, “It’s going really well. The overall spirit at the office is pretty good. We’re involved in a lot of new projects. It’s fun.”
ALEX BARRETT, Editor in Chief
Cloud Migration Confidential
Take it from the experts: Migrating an app to the cloud takes planning and forethought.
BY BETH PARISEAU
APPLICATION MIGRATION CAN be daunting when IT pros first look to make their way to the public cloud, but it doesn’t have to be, according to experts with experience moving workloads on behalf of clients. Whether you’re a novice cloud user or one of the enterprise clouderati, some rules of thumb apply when moving apps from being hosted on premises to a cloud service provider’s data center.
It’s important to note that at least half the battle lies in preparation for running in the cloud, from assessing applications to evaluating internal operations processes. After that, a phased, stepwise approach to cloud migration is strongly recommended.
GET READY: ASSESS APPLICATIONS FIRST
Before migrating, IT professionals must assess applications and understand their business objectives for the cloud. Will the cloud host production or test and dev applications? And is there a variable demand for resources within those apps that can take advantage of flexible cloud infrastructure?
“There are some apps that should not be moved based on their network needs and their [dependencies],” said Robert Green, principal cloud strategist at Enfinitum Inc., a consulting company in San Antonio.
CLOUD COMPUTING
For example, IT shops should “assess whether an application has ties into local file stores,” Green said. “If you’re not going to move the file store, then don’t move the app.”
Understanding app dependencies can require a painstaking deconstruction of a tangled IT web, but that work pays off in the end, Green said. Avoiding it can be expensive. An Enfinitum client recently attempted a fast “cut over” to a public cloud and didn’t recognize the presence of several on-premises firewalls that filtered Web traffic. Those firewalls quickly became overwhelmed when 400 users tried to traverse the Web to the public cloud after the migration, resulting in a $10 million loss for the business when employees couldn’t get online for eight hours.
Keep in mind that most applications will also need some modifications to deal with being housed in the public cloud.
“Apps for data center operations traditionally work under the assumption that they’re running on a reliable piece of hardware that’s never going to go away,” said Eric Dynowski, CEO of Turing Group, a cloud consultancy in Chicago. “In the cloud space, resources are ephemeral, and being able to deal with this transience at the app level is key.”
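Dynowski’s point about ephemeral resources can be sketched in code. The retry helper below is a generic illustration, not from any particular app or library, and every name in it is hypothetical: a cloud-side call wraps its dependency in retries with exponential backoff so that a vanished instance shows up as a brief delay rather than an outage.

```python
import time

def with_retries(op, attempts=3, base_delay=0.01):
    """Run op(), retrying with exponential backoff if it raises.

    A cloud-hosted app can't assume the service on the other end is
    always there; transient failures are retried, and only a
    persistent failure is surfaced to the caller.
    """
    for attempt in range(attempts):
        try:
            return op()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying

# Simulated flaky dependency: fails twice, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("instance went away")
    return "payload"

print(with_retries(flaky_fetch))  # "payload" after two retries
```

The same pattern applies at any layer of the app that talks over the network, which is what “dealing with transience at the app level” boils down to.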
Don’t forget policy when assessing apps, either, experts warn. Compliance with regulatory requirements and
other nontechnical issues can often make or break an app’s readiness for the cloud.
“At the end of the day, most applications that are running on Linux or Windows can run in the cloud and typically not face huge amounts of errors or issues,” said John Treadway, senior vice president at Cloud Technology Partners Inc. in Boston. “The more interesting things around what can’t work have to do with business policy.”
Prospective public cloud customers in highly regulated industries such as healthcare and finance should check not only with internal information security teams, but also with outside auditors before moving an app to the cloud, Treadway said.
CHOOSE THE RIGHT CLOUD AND CONNECTION
IT pros working on a cloud migration should also assess whether software as a service (SaaS), in which all elements of the application are delivered from a cloud provider’s data center, or infrastructure as a service (IaaS), in which only the underlying hardware infrastructure is provided, is more appropriate for a particular application.
Before migrating apps to an IaaS cloud, consider whether SaaS might be a better fit, said Glenn Grant, CEO of G2 Technology Group Inc., an Amazon partner
HIGHLIGHTS
- Prep work and smart decisions are the name of the game for cloud migrations.
- Applications will almost always need finessing to go cloudward.
- IaaS, SaaS, bandwidth and DR all play a part in cloud migrations—consider them carefully.
in Boston.
“We look to see if there are applications that can just be changed,” Grant said. “If a customer has Microsoft Exchange, and they’re really married to it and they have a great use case for it, we’ll stand up their own private Microsoft Exchange server in an [Amazon Virtual Private Cloud]. But in some cases ... it just does email, so we say, ‘Great, in that case, you should consider Office 365, which is still Exchange, or Google Apps, because you’re not leveraging any features that require the extra overhead of having your own server.’”
Experts also strongly recommend checking your bandwidth before moving an app to the cloud. Grant identifies bandwidth restrictions as the No. 1 pitfall his clients run into when migrating to the public cloud.
“For us, a regular gotcha has been being surprised or inconvenienced by the time it takes the data to get from point A to point B, which can throw off project schedules,” he said.
BRIDGE THE OPERATIONAL EXPERIENCE GAP
Another consideration is the operational culture and institutional mind-set that exists within your organization. Does your IT operations team have the skills to manage your applications in the public cloud? That’s an important question to ask before you move, experts say.
“The biggest change, and the one that quite frankly is what drives the pace of cloud adoption, is the operational change,” Treadway said. “Technically, moving a workload to Amazon isn’t all that difficult. ... But locking it down,
operating it, managing it, making sure that you’ve got a clear plan for how to deal with when things go wrong—that’s all new, and a lot of operations teams are so busy fighting day-to-day fires and keeping the lights on in the data center [that] they can’t actually do it.”
In response, IT organizations may want to bring in a managed service provider to run apps in the cloud in the early days while the IT operations team gets its sea legs.
Right-sizing applications is also key to keeping costs manageable in the public cloud, according to Green. IT pros working in data centers with physical hardware are more used to overprovisioning to ensure performance and availability, which is another mind-set that has to change.
“You’ve got to get the understanding of right-sizing and only taking what resources your app really needs, and having a plan to scale out in the event that your load goes up,” Green said.
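Green’s right-sizing advice amounts to a small optimization problem: pick the cheapest instance that covers observed peak load plus some headroom, and plan to scale out when nothing fits. The sketch below is purely illustrative; the instance catalog, sizes and prices are invented for the example, not any provider’s real offerings.

```python
# Hypothetical instance catalog: name -> (vCPUs, RAM in GB, $/hour).
# Sizes and prices are illustrative, not real pricing.
CATALOG = [
    ("small",   2,  4, 0.05),
    ("medium",  4,  8, 0.10),
    ("large",   8, 16, 0.20),
    ("xlarge", 16, 32, 0.40),
]

def right_size(peak_cpus, peak_ram_gb, headroom=1.2):
    """Return the cheapest instance covering observed peak load plus
    headroom, instead of the overprovisioned size ops teams reach for
    on physical hardware."""
    need_cpu = peak_cpus * headroom
    need_ram = peak_ram_gb * headroom
    for name, cpus, ram, price in CATALOG:  # ordered smallest-first
        if cpus >= need_cpu and ram >= need_ram:
            return name, price
    raise ValueError("no single instance fits; plan to scale out instead")

print(right_size(3, 6))  # peak of 3 CPUs / 6 GB plus 20% headroom
```

The ValueError branch is the interesting part: once load outgrows the biggest box, the answer is a scale-out plan, not a bigger server.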
GAME TIME: DURING AND AFTER CLOUD MIGRATION
While you may need to change some applications before migration to accommodate the cloud, it is generally a mistake to try to completely automate apps as you migrate them, experts caution. If at all possible, leave an app intact until it is successfully moved into the public cloud. Then, customization work can begin to take advantage of a particular public cloud’s features.
“I always do a phased approach,” Green said. “You move each application over there, and you test the heck out of it. Will it work, does it function the way I think it will function, does it have the performance characteristics that you expect it to have?”
Testing is the best way to uncover application dependencies you might not have been aware of, Green added.
Use the design created in the on-premises data center at first, experts recommend. Once applications and their data have moved over, create a “sandbox” environment where apps can be modified and decomposed to take advantage of features such as autoscaling.
“You may or may not be able to move straight to an automated cloud infrastructure,” said Mark Szynaka, cloud architect at New York-based Cloud eBroker. “There’s going to be a two- or three-step process where you mimic what you have in the enterprise even though it’s not as efficient as it should be, but it’s more familiar.”
Neither the testing nor app modification should be rushed, experts warn, and don’t cave to pressure to do things that are unsound.
“Be the resounding voice of reason versus saying, ‘Yes, I’ll do that for you,’” Green said. “The IT folks who are doing the migration ... need to learn how to articulate the risks and communicate them to the business.”
Finally, an often-overlooked aspect of cloud migration is ensuring application resiliency and disaster recovery (DR) in the cloud.
“DR is not included,” Green said. “Your service provider might have an SLA [service-level agreement], but you’ve got to read the fine print and understand the implications that SLA has for your application. And if you can’t live within that ... time frame, you’ve got to architect a way around it. ... You’ve got to plan that out and incorporate that into your cost structure.”
BETH PARISEAU is senior news writer for SearchAWS. Write to her at [email protected] or follow @PariseauTT on Twitter.
Survey Says: It’s All About the Cloud

What are the top two obstacles to your adoption of cloud computing?
- Not enough control over the environment: 35%
- Not enough security in the environment: 34%
- Too much capital already invested in internal IT: 30%
- We are not virtualized enough to implement cloud computing: 17%
- Does not offer adequate benefits for our organization: 16%
- A virtualized environment is enough. We do not need the cloud: 15%
- Other: 13%
N=248; SOURCE: TECHTARGET CLOUD INFRASTRUCTURE RESEARCH SURVEY, 2Q 2014

Have cloud vendors addressed your security concerns in the past year?
- My security questions have been addressed: 28%
- I don’t know: 27%
- It’s about the same as it was in 2013: 24%
- I’ve heard the right things, but I haven’t seen real progress: 17%
- Cloud security issues have gotten worse in the past year: 4%
N=248; SOURCE: TECHTARGET CLOUD INFRASTRUCTURE RESEARCH SURVEY, 2Q 2014

48%: Percentage of people who think “cloud architect” and “cloud administrator” will emerge as new roles in IT
SOURCE: TECHTARGET CLOUD INFRASTRUCTURE RESEARCH SURVEY, 2Q 2014
WORKERS EXPECT MORE out of their workplace today than ever before. Why? Technology, of course.
The biggest proponents of using consumer technology in the enterprise are Millennials (myself included), who now make up a huge part of the workforce.
“[The trend] is definitely being driven by younger generations coming in who are assuming that they are going to be using their devices in the business for work,” said Eric Klein, a senior analyst at VDC Research Group Inc.
They’ve grown up with IT as part of their daily lives. I, for one, grew up with a flip phone (remember those?) in my junior high backpack and joined Facebook back when “poking” was a thing. My phone had T-9 for texting, and no
one thought much about the security of what they posted on social networks. So, it has been a no-brainer for me to use my various devices, cloud collaboration and storage in the workplace.
As companies try to figure out how to offer consumer tech for corporate use, employees often go around the bounds of IT and choose their own software. I’ve accessed Google Drive, OneDrive and Dropbox for work—all with-out checking with IT. Organizations would do well to pick a business-level cloud storage product and instruct all employees to use it, making things more consistent across the company and easier for IT to control.
When it comes to mobile device use, I think most Millennials just want free rein. These are the devices we use all day for checking email, consuming news, Instagramming and more. If IT wants to lock that device down, it becomes a problem. But, of course, IT needs to maintain some kind of security. As a user, I want those measures to be nearly invisible. Sandboxing applications is one method, but the experience isn’t seamless yet in a lot of the dual-persona products out there.
IT’S GENERATION GAP
What makes things even trickier is that the generation gap also tends to exist within IT departments. There are the veteran employees who have been in the business a long
VIEWPOINT
The Kids Are Plugged In
Millennials are pushing the pace of technology adoption—as users and as IT employees.
BY ALYSSA WOOD
time; they know the ins and outs of IT, and they’ve been around for a lot of changes in the industry.
Then there are the fresh-faced new guys, often those pesky Millennials, who may have less experience in the field but are often very knowledgeable. Both groups have IT expertise, but each one approaches new technology in a different way.
With the consumerization of IT, there’s a slew of new things for IT staffers to consider: security tools for protecting corporate apps on personal devices, networking technology for speeding up that connection, virtualization software for streaming apps successfully to new devices and more. When it comes to adopting these technologies, younger IT employees may be quicker to jump on board because they’re used to mobile devices and social tech as an extension of themselves.
But not all of IT is so ready to scramble aboard. Older IT workers tend to push back more against those technologies for corporate use. It’s not that they turn a blind eye; they’re just more likely than younger IT employees to take a longer and harder look at the tech before adopting it.
“They’re taking more of a deliberate approach and not getting as excited about the technology, but really making sure that it is the right technology for that business,” Klein said.
They may be the voice of reason, though, if the organization gets caught up in the hype around new tech like bring your own device. Older IT workers can step in and take execs down a notch, explain the benefits and challenges of BYOD—and even offer an alternative.
The COPE (corporate-owned, personally enabled) model is a popular alternative that some in IT might be more comfortable with, Klein said. I don’t like the idea of carrying around two devices, dealing with two sets of email and having to learn two device systems. Still, it gives the IT department a little more flexibility and control—something everyone in IT can get on board with.
The thing is, it’s the younger side that’s growing. A large swath of my peers has joined the ranks of IT since college. More private universities are adding IT tracks to their curriculums, and increasing numbers of online schools are making it easier to get IT expertise as a career changer or for add-on education. Vendors add specialized certifications every day, making the IT landscape more competitive and regulated.
With more and more Millennials getting into those jobs—and all of us users attached to our devices—enterprises are looking younger by the day. And it’s the task of IT to keep up.
ALYSSA WOOD is an IT editor and reporter with experience covering server virtualization, desktop virtualization and VDI, data center technologies and consumerization of the enterprise. Contact her at [email protected].
CAN AN ENTERPRISE IT shop successfully adopt DevOps? Many IT professionals working in traditional, siloed IT environments would love to find out, in the name of the faster and more reliable systems that the DevOps concept promises. DevOps shops test and deploy new features and applications much more quickly than their peers, while those developers’ hands-on stance toward production operations encourages them to write higher quality code in the first place.
But where do they even start? Most attempts to define DevOps tend toward the theoretical. They lay forth the guiding principles and philosophy behind DevOps culture: rapid development, frequent testing, automation, collaboration. But they stop short of painting a concrete picture of what a DevOps shop actually looks like and how it runs. Who knows? It may turn out that an IT shop is just a few processes and organizational tweaks away from claiming DevOps shop status.
The reality is that there are many common threads, skills and tools that you can find across a variety of DevOps shops. Implementing some or all of them may put you on the road to improving time-to-code delivery and creating more resilient systems.
Getting to DevOps
No two DevOps shops are alike, but there are common threads that run throughout.
BY ALEX BARRETT
IT INFRASTRUCTURE
Home
Editor’s Letter
Cloud Migration Confidential
Survey Says: It’s All About the Cloud
The Kids Are Plugged In
Getting to DevOps
Overheard
First Look: Is Docker Right for You?
The Case for Leaf-Spine
Matchett: IT: Tear Down This Wall!
Madden: Nonpersistent Desktops Revisited
Plankers: Failure Is Sometimes the Best Option
THE DEV SIDE OF THE COIN
Not for nothing, the first part of the word DevOps is development. So it should come as no surprise that a cornerstone of a DevOps environment is its chosen development methodology. In the case of DevOps, that means some version of Agile. “It wouldn’t make sense to have DevOps without Agile,” said Evan Powell, CEO at DevOps startup StackStorm. “If you can’t rapidly deploy code, it doesn’t matter if you can operate it agilely.”
Thus, for many shops, the first step is to pick an Agile-friendly development methodology, typically Scrum or Kanban, which helps software development teams define goals, prioritize and assign tasks, and identify where in the development process problems are occurring.
Another mainstay of DevOps is continuous integration (CI) and continuous delivery and/or continuous deployment (CD). In a nutshell, continuous integration is about continuously and automatically running tests against a code branch, whereas CD automates the process of getting code into production after it’s been tested and approved.
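That CI/CD gate can be reduced to a few lines. The sketch below is a toy, not a real build system; the checks standing in for a test suite are hypothetical, but the control flow is the point: every change runs the whole suite automatically, and only a green build flows on to the deploy step.

```python
def ci_pipeline(changeset, test_suite, deploy):
    """Toy continuous-integration gate: every pushed change runs the
    full test suite automatically; continuous delivery means a green
    build flows straight on to the deploy step, with no scheduled
    off-peak release window."""
    results = [test(changeset) for test in test_suite]
    if all(results):
        deploy(changeset)
        return "deployed"
    return "build broken; deploy blocked"

# Hypothetical checks standing in for a real test suite.
suite = [
    lambda c: "payment" in c,         # expected feature code is present
    lambda c: "drop table" not in c,  # crude sanity check
]
deployed = []
print(ci_pipeline("payment service v2", suite, deployed.append))  # deployed
print(ci_pipeline("drop table users", suite, deployed.append))    # blocked
```

In a real shop the suite is run by a CI server on every push and the deploy step is itself automated, which is what replaces the old set-time, off-peak release.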
“In the old days, you used to release code into production at set times and off-peak hours,” said Brian Doll, vice president of strategy at GitHub Inc., which makes a source code repository platform. But that model of older orchestrated release cycles is “the complete antithesis of DevOps,” he said. “The goal is to automate the release cycle.”
The code itself is held in a source code repository for safekeeping and version control. These days, that repository is more often than not based on the open source git—for example, GitHub or Atlassian Bitbucket.
Team DevOps
HAVING “DEVOPS” IN your title doesn’t mean you’re a developer who learned ops, or you’re an ops guy who learned to code. Rather, DevOps usually refers to a member of a cross-functional team, “where everyone is responsible for everything,” said Paul Biggar, founder at CircleCI, a continuous integration and deployment provider.

In practice, organizations don’t make a wholesale shift to becoming DevOps shops, said Abner Germanow, senior director of enterprise strategy at New Relic Inc. In the enterprise, it’s unusual to find an organization that describes itself as an all-out DevOps shop. Rather, you tend to find DevOps teams aligned with specific applications, such as the mobile team or the checkout crew, he said.

At open-source software provider Red Hat Inc., an internal DevOps project called Team Inception took the following shape: a team leader, a product owner and Scrum master, and four engineers with systems administration, information security, development and release engineering skills.

“It actually worked out that every person had at least two of those skills on them, so there was enough crossover that we were able to very quickly work together on stuff,” said Steve Milner, a Team Inception member.
Git-based code repositories have largely overtaken older version control systems such as CVS or Apache Subversion. Initially written by Linux developer Linus Torvalds himself, git was a response to a need in the open-source community for a decentralized versioning system that could cater to globally dispersed development teams.
“It turns out that a decentralized tool works well in the enterprise, too, where teams can be large, global and
loosely coupled,” Doll said. Commercial versions of git add collaboration and policy-based approvals and workflows, plus they work with popular integrated development environments, CI/CD and testing tools.
INFRASTRUCTURE AS CODE
Software code isn’t the only thing stored in repositories these days. Increasingly, repositories also store detailed configuration scripts and templates created with configuration management tools like Puppet and Chef. In fact, Puppet and Chef are two of the most popular languages in GitHub repositories, Doll said.
Creating automated ways to configure and deploy infrastructure has given rise to the idea of “infrastructure as code.” Take Rally Software Development Corp., an Agile-inspired project management software provider that has spent the past year and a half creating Chef recipes for its core services, spread across 60 VMware hosts as well as AWS instances.
“Before, everything was configured largely by hand, but that’s difficult when you have to scale rapidly,” said Jonathan Chauncey, a software engineer at the firm. “It also caused problems when debugging problems were widespread across a stack of servers.”
Rally’s engineers already had strong Ruby programming skills, so the company selected Chef as its configuration management tool, giving Rally a consistent, repeatable way to install software, Chauncey said. In fact, Chauncey makes the case to write templates for all your systems—not just scalable Web services and microservices.
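The idea behind tools like Chef and Puppet can be reduced to a toy convergence loop. This is not Chef’s actual API, just a minimal sketch of the declare-desired-state, apply-only-the-diff pattern that makes installs consistent and repeatable; the node and recipe contents are hypothetical.

```python
def converge(current, desired):
    """Tiny sketch of what a configuration management run does:
    compare actual state against the declared recipe and apply only
    the changes needed to match it. Running it twice is a no-op,
    which is what makes the process consistent and repeatable."""
    actions = []
    for key, want in desired.items():
        if current.get(key) != want:
            actions.append(f"set {key}={want}")  # a real tool would install/configure here
            current[key] = want
    return actions

node = {"ruby": "1.9"}
recipe = {"ruby": "2.1", "nginx": "1.6", "app_user": "rally"}
print(converge(node, recipe))  # three changes applied
print(converge(node, recipe))  # [] -- already converged; the run is idempotent
```

Because the recipe is just text, it can live in the same git repository as the application code and go through the same review and CI process.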
Defining DevOps

AGILE is an established software development methodology. The tenets of Agile are simple code, frequent testing and delivering pieces of the application as they are ready, instead of delivering an entire finished application at the project’s end. Because of its simplicity and flexibility, Scrum is the most popular way of introducing Agile in an organization.
SOURCES: WHATIS.COM, AGILEMETHODOLOGY.ORG

KANBAN is a technique for managing software development, and gets its name from Toyota’s “just in time” scheduling and production system. It aims to identify and reduce bottlenecks in the software development pipeline to improve quality and time to deployment.
SOURCE: KANBANBLOG.COM/EXPLAINED
“Infrastructure shouldn’t be sentimental,” he said. “If your infrastructure dies at 2 a.m. and you need to rebuild it, do you really trust your team to do it right in the middle of the night?”
More to the point, having infrastructure as code means that it can be incorporated into other DevOps processes, namely testing and deployment. Rally stores all its infrastructure recipes in its GitHub repository, and they are tested and deployed as part of the same continuous integration and deployment processes that it runs dozens of times per day to deliver software features. “Regardless of what they are changing—software code or infrastructure—it goes through the same process,” Chauncey said.
Another benefit of infrastructure as code is you can always have evergreen systems, because it’s easy to keep systems up to date with the latest versions and packages, said Alain Gaeremynck, enterprise architect at Yellow Pages Group, Canada, and a DevOps manager.
“We subscribe to infrastructure as code and ‘disposable infrastructure’ concepts,” Gaeremynck said. “Instead of building infrastructure once and mindfully monitoring and maintaining it, we just destroy and rebuild it every time [we put out a new release].” As part of a build process, “we can easily sneak in updates to Java or the latest OS, for example, since it’s going to go through the QA cycle anyway.”
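Gaeremynck’s destroy-and-rebuild approach can be sketched as an immutable-image deploy. The functions, OS names and package versions below are hypothetical, a minimal illustration of why updates ride along for free when nothing old is preserved.

```python
def build_image(base_os, packages):
    """Assemble a fresh machine image from scratch for every release,
    rather than patching a long-lived server in place. The latest OS
    and package versions ride along because nothing old is kept."""
    return {"os": base_os, "packages": sorted(packages)}

def release(fleet, base_os, packages):
    """Disposable-infrastructure deploy: destroy the old instances
    and replace them with copies of the new image."""
    image = build_image(base_os, packages)
    fleet.clear()                                # tear down the previous release
    fleet.extend(dict(image) for _ in range(3))  # rebuild the fleet from the image
    return fleet

fleet = []
release(fleet, "os-14.04", ["java-7", "app-1.0"])
release(fleet, "os-14.04", ["java-8", "app-1.1"])  # the Java update sneaks in via the rebuild
print(fleet[0]["packages"])  # ['app-1.1', 'java-8']
```

Since every release rebuilds from the declared inputs, the sneaked-in Java or OS update goes through the same QA cycle as the application change it ships with.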
THE CLOUD CONNECTION
What, if anything, does DevOps have to do with cloud? For some, having a cloud infrastructure that you can
request and provision resources to and from is integral to DevOps.
“For me, a prerequisite of DevOps is the ability to consume resources as you go and to detach the infrastructure for the central service,” said Ram Akuka, director of DevOps at Deutsche Telekom HBS, a provider of telephony services aimed at small and medium-sized businesses.
That cloud doesn’t need to be Amazon Web Services—or even public. While Deutsche uses AWS for development resources, it also built an internal private cloud based on Citrix CloudStack and VMware for its production environment. It cobbles that environment together using Jenkins for continuous integration, homegrown scripts, Chef recipes stored in GitHub, and services from Ravello Systems to create developer sandboxes of production on AWS. “It works for us,” Akuka said.
However, enterprises attempting to pivot to DevOps struggle with legacy infrastructure, which doesn’t always play nice with modern infrastructure automation tools, much less private cloud management stacks.
Some infrastructure automation players such as QualiSystems or CFEngine tout their ability to handle legacy infrastructure that the likes of OpenStack don’t yet support, if they ever will. “CFEngine can automate anything that can take an embedded agent,” said Mahesh Kumar, CFEngine vice president of marketing. “We may have to compile an agent for that platform because we don’t have it, but it can be modified to work.”
StackStorm’s Powell said he has seen some shops develop clever workarounds for legacy infrastructure, such as writing an API layer to translate specific actions. “Then, if you have old EMC stuff with limited bindings, it doesn’t really matter. They’re not throwing out the legacy stuff, they’re putting lipstick on it to make it more DevOps-friendly,” he said.
GETTING ON THE SAME PAGE
When developers and operators come together—or are one and the same as part of functional teams—they need to have a common monitoring and reporting mechanism against which they all can work.
DevOps shops or teams tend to come together “in wartime, when things have broken really badly,” or, in peacetime, in response to an urgent business request, said Abner Germanow, senior director for enterprise marketing at New Relic Inc., a provider of application performance monitoring tools.
“Let’s say the CMO has just requested a new location-based, social-enabled mobile shopping app. It’s gotta happen fast, and it’s gotta happen now,” he said.
Those scenarios call for the creation of a “tiger team”: experts from multiple disciplines who are tasked with quickly and iteratively building or fixing an application. That process improves dramatically with a single source of truth, i.e., a dashboard.
Dashboards help to reduce finger-pointing, Germanow said. “When you talk to people who came out of war rooms, they’ll say they went in thinking that it was the server, but the data said otherwise,” he said. Having a dashboard also “helps move from a culture where, ‘George is a jerk because he deployed that code,’ to, ‘That code that George deployed isn’t working so let’s fix it.’”
Those views don’t need to be limited to IT staffers, Yellow Pages’ Gaeremynck said. They’re useful for business users, too, as a way to evaluate customer experience or to identify popular features.
But on a more pragmatic level, giving both developers and operators access to the same New Relic dashboard allows Yellow Pages to troubleshoot and test against production, he said.
Defining DevOps

■ CONTINUOUS INTEGRATION (CI) tests and reports on isolated changes when they are added to a larger code base. The outcome of using CI is that defects are usually smaller and easier to resolve, since a defect in the code base can be found and fixed quickly. SOURCE: WHATIS.COM

■ CONTINUOUS DELIVERY (CD) is similar to CI; it automatically tests each code commit as it’s added to the larger code base. There are usually several tests in a CD system, and code changes move to a pre-deployment staging area after they’re tested. SOURCE: WHATIS.COM

“No matter how good you are, you’ll never be able to fully mimic production,” Gaeremynck said. When Yellow Pages experiences a slowdown, developers launch a profiling session against their production systems and examine which parts of the code are taking the longest to execute.
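In Python, the kind of profiling session Gaeremynck describes can be run with the standard library’s cProfile module, which ranks functions by how much time they consume. The `slow_lookup` function here is a hypothetical stand-in for a real hot code path, not Yellow Pages’ code:

```python
import cProfile
import io
import pstats

def slow_lookup():
    # Hypothetical hot path standing in for, say, an expensive query.
    return sum(i * i for i in range(200_000))

def handle_request():
    return slow_lookup()

profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

# Rank functions by cumulative time to see where requests spend it.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print("slow_lookup" in report)  # the hot path shows up in the report
```

Production-grade tools attach the equivalent of this to a live process, but the idea is the same: the report, not intuition, says which code is slow.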
TYING IT ALL TOGETHER
Assembling the major elements of a DevOps environment (agile development, continuous integration and delivery, source code versioning, infrastructure as code, unified views) is relatively easy in a small startup, but enterprises may have a harder time.
“With DevOps, there’s a lot of talk of silos, and breaking thereof, because they’re bad. And that’s fine if you’re in a small company with 23 people. But what if you have 23,000 people?” said Dave Zweiback, a former Wall Street IT professional who is now vice president of engineering at Next Big Sound Inc., an analytics company for the music industry. Then there’s the reality that most IT professionals don’t have the cross-functional skills prized in DevOps environments. “Most devs don’t have an understanding of the underlying infrastructure, and most admins don’t code,” Zweiback said.
Still, DevOps in the enterprise need not be a pipe dream, said Justin Arbuckle, chief enterprise architect at Chef and formerly chief architect at GE Capital. There, Arbuckle spearheaded a DevOps initiative to create consistent infrastructure builds across the organization. From that experience, he identified a few key things for enterprises to consider when thinking about DevOps.
■ Start with a project that everyone can see, and that is necessary. “It is important to solve a non-trivial problem,” he said.
■ Follow the Agile principle of “everybody all together from early on.” Assemble a cross-business, cross-functional team consisting of operations professionals, application developers, infrastructure architects and even auditors. If successful, those team members will become apostles of sorts and proselytize DevOps throughout the organization, Arbuckle said.
■ Include infrastructure as part of your continuous delivery process.
■ Commit to elastic resources, cloud or otherwise. “You want your resources to be programmatically accessible,” he said.
Organizations that follow that advice still run into challenges; the “operational chasm” between old and new ways of doing things can persist, Arbuckle said. But it’s a start.
About three years into a DevOps reorganization, Yellow Pages’ Gaeremynck says the difficulties are worth it.
“At first it was painful for people, because the [old] software development lifecycle had been so much longer, and people are resistant to change,” he said. But, overall, “it’s going really well. The overall spirit at the office is pretty good. We’re involved in a lot of new projects. It’s fun.” ■
ALEX BARRETT is editor in chief of Modern Infrastructure.
MODERN INFRASTRUCTURE • SEPTEMBER 2014 16
Overheard at BriForum 2014
“Apps and data are like peas and carrots. I don’t like either of them, but they go together.”
GUNNAR BERGER, CTO of desktops and apps at Citrix, discussing Workspace Services

“The best thin client is the PC you already own.”
STEVE GREENBERG, president of Thin Client Computing

“The cloud is confusing as hell.”
ELIAS KHNASER, CTO of Sigma Solutions

“Even if you’re a dyed-in-the-wool Citrix customer, call your VMware rep and get them to fight [on pricing]. That’s kind of what we do, right?”
BRIAN MADDEN, in his keynote address

“For a long time, Microsoft denied that there were other platforms out there besides Windows.”
BENNY TRITSCH, CTO, bluecue consulting

“Chances are, most of the stuff on your device does not matter to anybody.”
BRIAN KATZ, director, head of mobility engineering at Sanofi
FIRST LOOK

Is Docker Right for You?
There’s a case to be made for containers instead of, or alongside of, server virtualization. BY ALEX BARRETT

DOCKER CONTAINER TECHNOLOGY has taken the cloud and application development world by storm since it was open-sourced a little over a year ago, offering a low-overhead way to package and deploy applications across a variety of Linux instances. VMware took notice, announcing at VMworld 2014 its partnership with Docker, Google and Pivotal to integrate containers and virtualization. But enterprise IT, where traditional server virtualization is pervasive and entrenched, has little use for the technology. Or does it?
Proponents maintain that Docker and its underlying Linux Containers (LXC) technology incur much less CPU and storage overhead than traditional hypervisors, and therefore provide better performance and greater consolidation. Boden Russell, an advisory software engineer with IBM Global Services, benchmarked OpenStack running on KVM against Docker and LXC and found that Docker either outperformed KVM by a wide margin or was at least comparable.
That led him to conclude that “traditional VMs will become the ‘edge case’ moving forward.”
While some enterprises could replace existing virtualization technology with Docker, a more likely scenario is that they will use it to augment what they already have, said Scott Johnston, Docker’s senior vice president of products. They could run Docker alongside a VMware environment, for example, or deploy Docker containers within a VMware VM, to maintain management consistency.
“Enterprises are interested in the agility aspects of Docker, but they’re really interested in compressing their data center footprint and reduced licensing,” he said. Tests like Russell’s suggest that it’s possible to run 10 Linux containers in a single VM. “Where you had 10 VMs before, you could have 100 containers.”
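The consolidation arithmetic behind that claim is simple. The densities below are the article’s illustrative figures, not measured benchmark results:

```python
# One host that ran 10 VMs, with roughly 10 Linux containers per VM,
# can present 100 application environments instead of 10.
vms_per_host = 10
containers_per_vm = 10  # density suggested by tests like Russell's
app_slots = vms_per_host * containers_per_vm
print(app_slots)  # → 100
```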
But the strength of Docker, and containers in general, is its weakness, especially in heterogeneous enterprise environments. Docker’s low overhead and reduced storage footprint come from not having to include a copy of the operating system for each containerized application. In turn, that limits Docker to Linux instances that support LXC. In other words, if you want to use Docker to containerize a Windows application, you’re out of luck.
Other compelling use cases for traditional virtualization, Russell writes, are instances where LXC is not supported on the host and cases where “the VM requires a unique kernel setup which is not applicable to other VMs on the host.” So while Docker is fast, easy to use and largely free, it’s not right for everyone. ■
ALEX BARRETT is editor in chief of Modern Infrastructure.
NETWORKING

The Case for Leaf-Spine
Virtualization and consolidation have forced a wholesale shift in data center networking topologies. BY ETHAN BANKS

THREE-LAYER DESIGNS are falling out of favor in modern data center networks, despite their ubiquity and familiarity. In their stead? Leaf-spine topologies.

As organizations seek to maximize both the utility and use of their respective data centers, there’s been increased scrutiny of mainstream network topologies. In this instance, “topology” refers to the way in which network devices are interconnected, forming the pathways hosts follow to communicate with each other. For many years, the standard network topology has been a three-layer architecture: the access layer, where hosts connect to the network; the aggregation layer, where access switches interconnect; and the core, where aggregation switches interconnect with each other and with networks outside the data center.

This model continues to be successful because the design provides a predictable foundation for a data center network. Physically scaling the three-layer model is a matter of identifying port density requirements and purchasing an appropriate number of switches for each layer. Structured cabling requirements are also predictable, as interconnecting between layers is done the same way across the data center. Therefore, growing a three-layer network is as simple as ordering more switches and running more cable against well-known capital and operational cost numbers. In short, the three-layer model is comfortable for many networking professionals.
WHY THREE-LAYER FALLS SHORT
Yet there are many reasons that network architects explore new topologies. Perhaps the most significant is the change in data center traffic patterns. Traditionally, most network traffic has moved along a north-south line, meaning hosts communicate with hosts in another segment of the network. North-south traffic flows down the tree for routing service, and then back up the tree to reach its destination. Meanwhile, hosts within the same network segment are usually connected to the same switch, which keeps their traffic off of the network interconnection points.
However, in modern data centers, changes to compute and storage infrastructure have shifted the predominant network traffic patterns from north-south to east-west. In east-west traffic flows, network segments are spread across multiple access switches, requiring hosts that were once located on the same switch to traverse network interconnection points. There are at least two major trends that have contributed to east-west traffic becoming prevalent:

Figure 1: The traditional, three-layer network design (two core switches, two aggregation switches and four access switches)
■ Convergence. Storage traffic often shares the same physical network as application traffic. Storage traffic usually flows between hosts and arrays that are in the same network segment, logically right next to each other.
Not-So-Futuristic Network Topologies

THE REASON ALTERNATE and emerging designs exist is that they address specific issues for specific applications. Alternatively, these newer designs rethink network design theory completely, moving network intelligence into the hosts and using those hosts as forwarding nodes in addition to traditional switches. Mainstream networks might not need that sort of capability today, but emerging trends often trickle down to the mainstream. While they might not be what’s now, they could well be what’s next.

There are a few other generally accepted network topologies beyond the traditional three-layer and leaf-spine options. While they are less commonly found in real-world deployments, they are relevant and well-understood.
■ Multi-tier leaf-spine. One approach to scaling a leaf-spine network horizontally while maintaining an acceptable oversubscription ratio is to add a second vertical leaf layer. I explore this idea in some detail on my blog.

■ Hypercube. A simple 3-D hypercube network is really just a cube: a six-sided box with switches at each corner. A 4-D hypercube (aka a tesseract) is a cube within a cube, with switches at the corners connected to each other. The inner cube connects to the outer cube at the corners. Hosts connect to the switches on the outer cube. An organization needs to understand its application traffic flows in detail to know whether or not a hypercube topology is worth considering.

■ Toroidal. This term refers to any ring-shaped topology. A 3-D torus is a highly structured internetwork of rings. Toroids are a popular option in high-performance computing environments and may or may not rely on switches to interconnect between compute nodes. ■
■ Virtualization. As IT continues to virtualize physical hosts into virtual machines, the ability to move those workloads around easily has become a mainstream, normative function. When virtual machines move, they do so from physical host to physical host within a network segment.
Running east-west traffic through a network topology that was designed for north-south traffic creates the issue of oversubscription of interconnection links between layers. If hosts on one access switch need to talk at high speed with hosts attached to another access switch, the uplinks between the access layer and the aggregation layer become a potential, and indeed probable, congestion point. The spanning-tree protocol used in three-tier network designs often exacerbates the congestion. Because spanning-tree blocks redundant links to prevent loops, access switches with dual uplinks can only use one of the links for a given network segment.
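The cost of spanning-tree’s blocked links is easy to quantify. The link counts and speeds below are illustrative, not taken from any specific deployment:

```python
# A dual-homed access switch has two uplinks, but spanning-tree
# forwards on only one of them for a given network segment.
uplinks = 2
link_gbps = 10

stp_usable = 1 * link_gbps               # one link forwards, one is blocked
multipath_usable = uplinks * link_gbps   # TRILL/SPB forward on all links

print(stp_usable, multipath_usable)  # → 10 20
```

Half the purchased uplink capacity sits idle under spanning-tree; multipath protocols reclaim it, which is a large part of the leaf-spine argument made later in this article.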
Adding more bandwidth between the layers in the form of faster inter-switch links is a logical solution to help overcome the aforementioned congestion. This helps the three-layer model scale, but only to a point. The problem of host-to-host east-west traffic doesn’t just happen one conversation at a time. Instead, hosts are talking to other hosts all over the data center at any given time, all the time. So while adding bandwidth helps facilitate these conversations, it’s only part of the solution.

Figure 2: A small leaf-spine network (two spine switches, with every leaf switch uplinked to each spine)

Outside the Box Topologies

■ The Jellyfish topology is largely random. In this design, switches are interconnected based on the preference of the network designer. In research studies, testing Jellyfish designs resulted in 25% higher capacity over traditional network topologies.

■ Scafida, aka “scale-free” network topologies, are somewhat like Jellyfish in that there is randomness about them, but paradoxically in that randomness, more structure becomes apparent. The idea behind Scafida is that certain switches end up as densely connected hub sites, similar to the way an airline manages flight patterns. Scale-free advocates point out the similarity of Scafida to biological networks that have evolved in nature.

■ DCell leverages the fact that many servers ship with multiple network interface cards. Some of these NICs are used to connect directly from one server to another in a cell, while others are used to interconnect via a switch to other cells. DCell assumes a server has four or more NICs.

■ Similar to DCell, FiConn uses a hierarchy of server-to-server interconnects and cells, but only assumes two NICs.

■ Like DCell and FiConn, BCube uses extra server ports for direct communication, but is optimized specifically for modular data centers that are deployed as shipping containers. Microsoft, the power behind BCube, built the BCube Source Routing protocol to manage forwarding across this network topology.

■ Another Microsoft project is CamCube, effectively a 3-D torus running Microsoft’s CamCubeOS on top. The purpose is to optimize traffic flow across the torus while it is being used to interconnect clusters of hosts. CamCubeOS assumes that traditional network forwarding paradigms are ineffective in this application and replaces them.

■ Google’s flattened butterfly is a specific network construct akin to a chessboard. In this grid of switches, traffic can move to any switch in a given dimension, like a rook in the game of chess. The point of this novel idea is to reduce power consumption, a topic of great concern to Google, one of the largest operators of data centers in the world. ■
The rest of the solution is to add switches at the layer below the access layer, and then spread the links from the access layer across that new layer throughout the network. This topology is called a leaf-spine. The strength of a leaf-spine design is its ability to scale horizontally through the addition of spine switches, something that spanning-tree deployments with a traditional three-layer design cannot do.
Sharp-eyed readers will note that this appears similar to the traditional three-layer design, just with more switches in the spine layer. Aside from more switches, what is the key difference, then? In a leaf-spine topology, all links are used to forward traffic, often using modern spanning-tree protocol replacements such as Transparent Interconnection of Lots of Links (TRILL) or Shortest Path Bridging (SPB). TRILL and SPB are designed to provide forwarding across all available links, while still maintaining a loop-free network topology, similar to routed networks.
Figure 3: Oversubscription between network layers (two spine switches and four leaf switches, each leaf serving 48 hosts; 48 × 10 Gbps = 480 Gbps of host bandwidth against 4 × 40 Gbps = 160 Gbps of uplink bandwidth, a 3:1 ratio)
Cloud-scale data centers go even beyond SPB and TRILL, using a combination of routing between layers and a network virtualization overlay such as VXLAN to create network segments.
THE ADVANTAGES OF LEAF-SPINE
When reviewing vendor literature and reference designs, it becomes apparent that leaf-spine topologies have become the de facto standard. In fact, it’s difficult to find a design other than leaf-spine among vendors’ various Ethernet fabric designs. There are good reasons for this: Leaf-spine has several desirable characteristics that play into the hands of network designers needing to optimize east-west traffic.
■ All east-west hosts are equidistant from one another. Leaf-spine takes the idea of the access and aggregation layers from the traditional design and widens it. A host on any particular leaf switch can talk to a host on any other leaf switch and know for certain that the traffic will only traverse three switches: the ingress leaf switch, a spine switch and the egress leaf switch. As a result, applications running over this network infrastructure will behave predictably, which is a key feature for organizations running multi-tiered Web applications, high-performance computing clusters or high-frequency trading.
■ Leaf-spine uses all interconnection links. One aspect of the traditional three-layer design is the use of spanning-tree, a loop prevention protocol. As mentioned earlier, spanning-tree detects loops and then blocks the links forming the loop. This means that dual-homed access switches only use one of their two uplinks. Modern alternatives to spanning-tree such as SPB and TRILL allow all links between leaf and spine to be used for forwarding traffic, allowing the network to scale as traffic grows.
■ It supports fixed configuration switches. Fixed configuration switches ship with a specific number of ports, compared with chassis switches, which feature modular slots that can be filled with line cards to meet port density requirements. Chassis switches tend to be quite costly compared to fixed configuration switches, but they are needed in traditional three-layer topologies, where large numbers of switches from one layer connect to two switches at the next layer. Leaf-spine allows interconnections to be spread across a large number of spine switches, obviating the need for massive chassis switches in some leaf-spine designs. While chassis switches certainly can be used in the spine layer, many organizations find cost savings in deploying fixed-switch spines.
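The equidistance property from the first point above can be demonstrated with a toy model of the fabric, assuming every leaf uplinks to every spine (switch names are illustrative):

```python
# In a leaf-spine fabric, traffic between hosts on different leaves
# always crosses exactly three switches: ingress leaf, spine, egress leaf.
def switch_path(src_leaf, dst_leaf, spine="spine-1"):
    if src_leaf == dst_leaf:
        return [src_leaf]  # same-switch traffic never leaves the leaf
    return [src_leaf, spine, dst_leaf]

leaves = [f"leaf-{i}" for i in range(1, 9)]
lengths = {len(switch_path(a, b)) for a in leaves for b in leaves if a != b}
print(lengths)  # → {3}
```

Every cross-leaf pair yields a path of length three, which is why latency stays uniform no matter where workloads land in the fabric.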
THE CONS OF LEAF-SPINE
For all its merits, leaf-spine has its shortcomings. One drawback is that the switch count needed to reach the required scale is potentially high. Leaf-spine topologies need to scale up to the point that they can support all of the physical hosts that will need to connect to them. The larger the number of leaf switches needed to uplink all of the physical hosts, the wider the spine needs to be to accommodate them.
A spine can only extend to a certain point before either the spine switches run out of ports and cannot interconnect more leaf switches, or the oversubscription rate between the leaf and spine layers becomes unacceptable. In general, a 3:1 oversubscription rate between the leaf and spine layers is deemed acceptable. For example, 48 hosts connecting to the leaf layer at 10 Gbps use a potential maximum of 480 Gbps. If the leaf layer connects to the spine layer using four 40 Gbps uplinks, the interconnect bandwidth is 160 Gbps, for an oversubscription ratio of 3:1.
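That worked example translates directly into code; the figures are the article’s own:

```python
# Oversubscription ratio = host-facing bandwidth / spine-facing bandwidth,
# computed per leaf switch. Figures match the article's example.
hosts, host_gbps = 48, 10
uplinks, uplink_gbps = 4, 40

southbound = hosts * host_gbps      # 480 Gbps toward the servers
northbound = uplinks * uplink_gbps  # 160 Gbps toward the spine

ratio = southbound // northbound
print(f"{ratio}:1")  # → 3:1
```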
Another disadvantage of leaf-spine networks is their significant cabling requirements. The number of cables required between the leaf and spine layers increases with the addition of each new spine switch: the wider the spine, the more interconnects are required. The challenge for data center managers is structuring cabling plants with sufficient fiber optic strands to interconnect the layers. In addition, interconnecting switches at distances of dozens of meters requires expensive optical modules, adding to the overall cost of a leaf-spine deployment. While there are budget-priced copper modules useful for short distances, optical modules are almost always necessary and are a significant cost in any modern data center.
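Because each leaf must link to each spine, cabling grows multiplicatively with spine width. A sketch, assuming one link per leaf-spine pair:

```python
# Full mesh between layers: every leaf switch uplinks to every spine.
def fabric_cables(leaf_count, spine_count, links_per_pair=1):
    return leaf_count * spine_count * links_per_pair

print(fabric_cables(8, 2))  # → 16
print(fabric_cables(8, 4))  # → 32; doubling the spine doubles the cabling
```

This is why adding spine switches for bandwidth also means re-planning the fiber plant, not just buying hardware.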
As far as industry trends go, leaf-spine design is currently favored for data center topologies of almost any size. Leaf-spine is predictable and scalable, and it solves the very common east-west traffic problem. Any organization whose IT infrastructure is moving toward convergence and high levels of virtualization should evaluate a leaf-spine network topology. ■
ETHAN BANKS has been managing networks since 1995. Ethan co-hosts the Packet Pushers Podcast. Contact him at @ecbanks and visit ethancbanks.com.
THE NEXT BIG THING

IT: Tear Down This Wall!
It’s time to knock down those silos, one by one. BY MIKE MATCHETT

IN THE GOOD old days of IT, we built clear silos of domain expertise, and for good reason: IT infrastructure was complicated. Server folks monitored compute hosts, storage admins wrangled disks and network people untangled wires. Having parallel domains was seen as the best way to optimize IT. With a clean separation of concerns, the theory was that you could run IT as efficiently as possible, allowing experts to learn specialized skills, deploy domain-specific hardware and manage complex resources.
Except that dealing with multiple IT domains was never optimal for business users, application owners, IT financial management, data center facilities folks or the rest of the ecosystem. When IT is organized into silos, anytime there is a problem (troubleshooting application performance, competing for rack space or allocating a limited budget), the resulting infighting, finger-pointing and political posturing just wastes valuable time and money. And that’s not even mentioning just how not-very-interoperable heterogeneous infrastructure can be, despite standardized protocols and supposedly thorough vendor validation testing.
The siloed world may be a comfortable place for subject matter experts in their own domains, but it is not IT at its best. For someone outside IT, having to navigate a byzantine organization just to try out new things can stifle business creativity and innovation. Things are beginning to change, and IT is breaking down those walls. In fact, it could be argued that all the tectonic shifts in IT within the last decade tear down walls. Everything points to a massive shift in how IT will be organized and staffed.
VIRTUALIZATION SUCKS UP SILOS
Certainly virtualization plays a big role in collapsing IT management domains. At first, virtualization solutions focused on freeing up the server domain, aggregating and pooling physical hosts to serve out idealized virtual machine images.
That was a big improvement. In my early days as a systems management consultant, I remember packed corporate data centers where well over half of the hosts lay idle or barely used. Each was dedicated to some little-used (but always-important) application. The amount of over-provisioning was incredible. Not only did virtualization reclaim that excess capacity, it also homogenized the underlying physical resources and thus simplified infrastructure management.
Virtualization, which is now ubiquitous, has evolved to include more than servers. Storage and networking can be virtualized at different levels. IT can deploy variations of these resources as virtual appliances (e.g., HP StoreVirtual VSA), and hypervisors are beginning to integrate these resources directly in the kernel (e.g., VMware Virtual SAN). While not all workloads are best served with virtualized resources, we do see the virtualization admin taking on more direct control of and responsibility for operating the end-to-end infrastructure.
CLOUDS MAKE IT FOGGY
When a business user pulls out a credit card to subscribe to a SaaS application, there are no silos to deal with. Internal IT organizations will have to evolve into IT service providers as well, if only to keep up with the external competition. As a service provider, the focus shifts to providing value through service delivery and away from silo management.
We expect that most IT clouds will be hybrid, meaning that IT will have to take advantage of—essentially broker—where workloads run for greatest advantage at the lowest cost. IT will need to be able to interoperate data and workloads between on-premises infrastructure and cloud-hosted assets, using elastic provisioning and dynamic subscription pricing. But this won't happen as long as IT is organized into independently managed silos that regularly butt heads and compete for budget.
CONVERGENCE LOOMS LARGE
Perhaps the most direct trend aiming to tear down IT silos is convergence. In the simple version of convergence, IT vendors like Dell, HP, IBM, VCE and system integrators (using reference architectures) pre-package racks of existing server, storage and networking hardware together, usually with a hypervisor pre-installed and some unified element management solution layered on top.
Converged systems deliver plug-and-play IT infrastructure, but we also understand that converged systems might still require deep silo expertise. The main attraction, more than anything else, is quick deployment. In other words, the success of converged solutions stems from the failure of their respective components to interoperate (or autonomously install) nicely out of the box.
Going a step further, hyperconvergence systems from vendors like Nimboxx, Nutanix, Scale Computing and SimpliVity ship boxed appliances with server, storage and networking all baked together inside. Beyond simplification and domain convergence, hyperconverged products offer internal, built-in optimizations (e.g., global inline dedupe) and plug-and-play scale-out growth.

There are also competing "hyperconvergence reference architecture" offerings assembled by system integrators that leverage software-defined infrastructure (e.g., Maxta) on commodity servers. The key is that hyperconverged infrastructure doesn't come with discrete silos.
IT ALL BOILS DOWN TO MANAGEMENT
To make it all work, IT management products will need to include cross-domain features and also incorporate hybrid cloud coverage. IT management will shift from element monitoring toward ensuring delivered service qualities. Automation, policies and expert systems will come to replace a lot of the need for on-premises subject-matter expertise.
So what happens to IT organizations? In this brave new world, IT might reorganize around a virtual-hybrid cloud admin who will manage and align IT services to applications, a cloud service "broker" who will do high-level capacity planning to optimize across on- and off-premises IT, and an infrastructure/facilities owner who will build competitive private data centers using converged building blocks.
On the application side, we see a similar evolution toward data "leverage" instead of data management, application operations (services with DevOps) instead of just programming, and system architecture that encompasses not only hybrid solutions, but also up- and downstream IT (e.g., suppliers and clients).
In all cases, IT will transform from a siloed set of reactive cost centers into a service provider with a focus on helping the business compete. What does your IT staffing future look like?
MIKE MATCHETT is a senior analyst and consultant at Taneja Group.
END-USER ADVOCATE

Let's Revisit Nonpersistent Desktops
The dream of managing a single master disk image has yet to materialize. BY BRIAN MADDEN

ONE OF THE early alleged benefits of VDI that vendors pushed in the mid-2000s was that virtual desktops are easier to manage. They claimed that with VDI, a number of users could share a single master disk image, so a software patch or an application update would have to be installed only once into the master image, and voila!—all the users would be instantly updated.

Contrast that with the traditional desktop environment, where some poor schmuck has to manually update each desktop, one by one, for every change. (Even remote software distribution platforms like Altiris or SCCM involve a lot of complexity around building packages, scheduling the software pushes, cleaning up the remnants, etc.)

When we have that single shared disk image, we say that those are "nonpersistent" disk images because the disk images do not persist between reboots. No matter what the users do while they're logged on, their changes are discarded when they log off, and they get a brand-new copy of the original desktop the next time they log on. In this case, only the administrator can update that master shared image.

These nonpersistent desktops are the opposite of "persistent" desktops, where everything the users change is still there the next time they log on. Persistent desktops are the more traditional style of desktops. They're what most laptops and desktop computers in the world are today.

So you can see why many people were excited over the notion of just having to install software once for hundreds of users if they were to move to nonpersistent VDI.

There's a major problem with this notion, though. While nonpersistent desktops are theoretically easier to manage, the reality is that the past 20 years of desktop and laptop management is based on persistent images. So how do you magically go from an environment where each user has his own custom environment to a scenario where all your users are sharing a single master image?

The answer is you don't.

VDI vendors such as Citrix and Microsoft tried to minimize this complexity, claiming that you could use application virtualization products like Microsoft App-V, VMware ThinApp or Symantec Workspace Virtualization to "virtualize" apps so that they could be delivered on demand into each user's Windows environment after he or she logs in. In this scenario, a user logs in and gets access to the generic, shared, "nonpersistent" desktop, and then the app virtualization tool kicks in to deliver nicely packaged applications.
Again, this sounds great at first, but the unfortunate reality is that even the best application virtualization solutions only have about a 70% to 80% compatibility rate with existing Windows applications. So what are companies to do with their other 20% to 30% of applications? The VDI vendors would tell them to just install them into the "base" shared image, but now that puts organizations right back where they started—where they have to maintain different base images for different users, manage the updates to those apps and also manage the newly introduced complexity of application virtualization.
No thanks!

These are the reasons why nonpersistent VDI never took off like people first thought it would six or eight years ago. And it's why VDI experts pushed people toward fully persistent VDI.
If you look at the technology on the market today, we now have new approaches to nonpersistent VDI, which offer 100% application compatibility, including products from CloudVolumes, FSLogix and Unidesk. Next month, we'll dig into how you can use these. In the meantime, start thinking about what it would be like if you could actually get the promised benefits of nonpersistent VDI!
BRIAN MADDEN is an opinionated, supertechnical, fiercely independent desktop virtualization and consumerization expert. Write to him at [email protected].
WHEN EVERYTHING IS working right, people aren’t learning anything. This adage holds true for the military, athletic organizations and certainly IT.
Take VMware vSphere, for instance. When your organization installed it, did it really learn anything? Sure, you probably had some training, and you learned where things like network configurations and the VM settings are located. But when did you really learn something? For me, the real learning started the day that one of the storage arrays didn't get along with the vSphere environment. Or perhaps it was the day we needed to clone a VM's snapshot, which isn't an option in the GUI. On that day, much was learned about the infinitely powerful vSphere PowerCLI—lessons that opened doors to many good things.
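That snapshot-cloning task, for instance, can be scripted in a few lines of PowerCLI. This is only a rough sketch, not a production procedure: the server, VM, snapshot and datastore names below are hypothetical placeholders, and placement choices will differ per environment.

```powershell
# Sketch: clone a VM from one of its snapshots with PowerCLI.
# All names here (server, VM, snapshot) are placeholders.
Connect-VIServer -Server vcenter.example.com

$vm   = Get-VM -Name "app01"
$snap = Get-Snapshot -VM $vm -Name "pre-patch"

# New-VM's -ReferenceSnapshot parameter creates a linked clone
# based on that snapshot, which the GUI of the day didn't expose.
New-VM -Name "app01-clone" `
       -VM $vm `
       -LinkedClone `
       -ReferenceSnapshot $snap `
       -VMHost (Get-VMHost | Select-Object -First 1) `
       -Datastore (Get-Datastore | Select-Object -First 1)
```

A linked clone shares its base disks with the snapshot, so it comes up quickly; if you need a fully independent copy, you'd follow up with a full clone or a Storage vMotion of the new VM.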
Think about a recent project that went smoothly. Did anybody really learn anything? Ask yourself what happens when something is wrong. Let’s say that an important business process fails to run overnight. Can your staff fix it, or do you need a consultant or a support call? What happens after that call? Does your organization record the fix and related steps in a knowledgebase or wiki so the information is available in the future, or do you just thank God that it got fixed and move on? Was there a lot of yelling and finger pointing during the problem, or was it dealt with calmly and professionally?
These are important questions, because the way an organization deals with failure tells you a lot about its culture. Some organizations handle failure extremely poorly, with managers roaming the halls screaming at people and firing them in the middle of the outage. Does behavior like that lengthen or shorten an outage? What does that tell employees about taking risks, even calculated ones? What does that tell you about the employees that still work there? I know a number of organizations whose employees are so scared of being blamed for anything that they won't even apply desperately needed security patches to their systems. It's easier to count the systems that don't have massive security problems than the opposite because of this pervasive fear-driven culture.
IN THE MIX

Failure Is Sometimes the Best Option
When things go wrong, the real learning starts. BY BOB PLANKERS
Some IT teams just freeze up when they encounter failure. They don't know what to do or where to begin, so they just don't do anything. Maybe the problem will fix itself, or maybe someone else will step up and fix it. People work around the problems, sometimes going outside of IT for solutions in the cloud. That isn't good, either, because "shadow IT" expenditures should be avoided at all costs. I know of one organization, no longer in existence, whose inventory systems became so broken over time that the company ended up switching to pen and paper. Yes, really—pen and paper. No shadow IT, but also no company now, either.
My favorite kind of organization is the type that treats failure as a learning opportunity. They keep blame to a minimum, even during post-mortem analysis, because defensive people aren't open to learning. These organizations stay focused on the problems at hand, and work as a team to get things done. This lends itself to both professionalism and honesty, which generates frank discussions of problems and solutions. Experimentation and failure are also encouraged as part of new implementations and upgrades. Failure of this sort isn't seen as a risk or as a detour, but as a way to find the best path forward. This ideal organization also embraces the DevOps and lean software development ideas of "fail fast, learn rapidly." Employees learn how to make good decisions, take good calculated risks, and they succeed.
Consider your organization. What will it take to start encouraging better risk-taking and less blame? Or, if it is hopeless, why are you still there?
BOB PLANKERS is a virtualization and cloud architect at a major Midwestern university. He is also the author of The Lone Sysadmin blog.
Modern Infrastructure is a SearchDataCenter.com publication.
Margie Semilof, Editorial Director
Alex Barrett, Editor in Chief
Christine Cignoli, Senior Site Editor
Phil Sweeney, Managing Editor
Eugene Demaitre, Associate Managing Editor
Patrick Hammond, Associate Features Editor
Martha Moore, Production Editor
Linda Koury, Director of Online Design
Rebecca Kitchens, Publisher, [email protected]
TechTarget, 275 Grove Street, Newton, MA 02466 www.techtarget.com
© 2014 TechTarget Inc. No part of this publication may be transmitted or reproduced in any form or by any means without written permission from the publisher. TechTarget reprints are available through The YGS Group.
About TechTarget: TechTarget publishes media for information technology professionals. More than 100 focused websites enable quick access to a deep store of news, advice and analysis about the technologies, products and processes crucial to your job. Our live and virtual events give you direct access to independent expert commentary and advice. At IT Knowledge Exchange, our social community, you can get advice and share solutions with peers and experts.
COVER PHOTOGRAPH AND PAGE 3: SOBERP/THINKSTOCK
Follow @moderninfra on Twitter!