F5 Acopia ARX Product Demonstration
DESCRIPTION
F5 Acopia ARX Product Demonstration. Troy Alexander, Field Systems Engineer. Agenda: Acopia Technology Introduction; Product Demonstration. Introducing F5’s Acopia Technology. The Constraints of Today’s Infrastructure: complex (mixed vendors, platforms, file systems); inflexible.

TRANSCRIPT
F5 Acopia ARX Product Demonstration
Troy Alexander, Field Systems Engineer
2
Agenda
Acopia Technology Introduction
Product Demonstration
3
Introducing F5’s Acopia Technology
4
The cost of managing storage is five to ten times the acquisition cost
The Constraints of Today’s Infrastructure
Complex – mixed vendors, platforms, file systems
Inflexible – access is tightly coupled to file location; disruptive to move data
Inefficient – resources are under- and over-utilized
Growing rapidly – 70% annually (80% are files)
5
Virtualization Breaks the Constraints
Simplified access – consolidated, persistent access points
Flexibility – data location not bound to physical resources
Optimized utilization – balances load across shared resources
Leverages technology – freedom to choose the most appropriate file storage
“File virtualization is the hottest new storage technology in plan today…” (TheInfoPro)
6
Where Does Acopia Fit?
Plugs into existing IP / Ethernet switches
Virtualizes heterogeneous file storage devices that present file systems via NFS and/or CIFS
– NAS, file servers, gateways
Does not connect directly to the SAN
– Can manage SAN data presented through a gateway or server
No changes to existing infrastructure
– The ARX appears as a normal NAS device to clients
– The ARX appears as a normal CIFS or NFS client to storage
SAN virtualization manages blocks, Acopia manages files
– Data management vs. storage management
[Diagram: users and applications on the LAN reach NAS and file servers through Acopia file virtualization; SAN virtualization manages blocks within the SAN behind IBM, HP, EMC, HDS and NetApp arrays.]
7
What does F5’s Acopia do?
• Automates common storage management tasks
– Migration
– Storage tiering
– Load balancing
• These tasks now take place without affecting access to the file data or requiring client re-configuration
During the demo the F5 Acopia ARX will virtualize a multi-protocol (NFS & CIFS) environment
F5 Acopia provides the same functionality for NFS, CIFS and multi-protocol environments
8
What are the F5 Acopia differentiators?
Purpose-built to meet the challenges of global file management
– Separate data (I/O) & control planes with dedicated resources
– Enterprise scale: >2B files, 24 Gbps in a single switch
Real-time management of live data
– Unique dynamic load balancing
– Unique in-line file placement (files are not placed on the wrong share and then migrated after the fact)
– No reliance on stubs or redirection
Superior reliability, availability and supportability
– Integrity of in-flight operations ensured with redundant NVRAM
– Enterprise server class redundancy
– Comprehensive logging, reporting, SNMP, call-home, port mirroring
Proven in Fortune 1000 enterprises
– Merrill Lynch, Bear Stearns, Yahoo, Warner Music, Toshiba, United Rentals, Novartis, Raytheon, The Hartford, DreamWorks, etc.
9
What is the F5 Acopia architecture?
Patented tiered architecture separates the data & control paths
– The data path handles non-metadata operations at wire speed
– The control path handles operations that affect metadata & migration (a rough sketch of this split follows)
Each path has dedicated processing and memory resources, and each can scale independently; unique scale and availability
PC-based appliances are inadequate – single PCI bus, processor & shared memory
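The data/control split above can be pictured as a dispatcher that sends metadata-affecting operations to the control path and plain I/O down the fast path. A rough Python sketch under that assumption; the operation names and their classification are illustrative, not the switch firmware.

```python
# Rough sketch of the data/control path split (illustrative only).
METADATA_OPS = {"create", "rename", "delete", "migrate"}   # handled by the control path
DATA_OPS = {"read", "write"}                               # handled by the fast path

def dispatch(op, fast_path, control_path):
    """Send non-metadata I/O to the fast path, metadata ops to the control path."""
    if op in DATA_OPS:
        return fast_path(op)
    if op in METADATA_OPS:
        return control_path(op)
    raise ValueError(f"unknown operation: {op}")

print(dispatch("read",   lambda o: f"{o}: fast path", lambda o: f"{o}: control path"))
print(dispatch("rename", lambda o: f"{o}: fast path", lambda o: f"{o}: control path"))
```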
[Diagram: clients and NAS & file servers connect through the Adaptive Resource Switch, which separates a data path (fast path) from a control path backed by a local transaction log and a remote transaction log (mirror).]
10
How does F5 Acopia virtualization work?
[Diagram: applications and users mount a virtual IP and virtual volume; the “Virtual Volume Manager” routes each virtual path to a physical path on the NetApp, EMC and NAS volumes behind it. The virtual path arx:/eng/project1/spec.doc currently resolves to na:/vol/vol2/project1/spec.doc, and ILM operation 1 is about to run.]
11
How does F5 Acopia virtualization work?
[Diagram: after ILM operation 1, the virtual path arx:/eng/project1/spec.doc does not change, but it now resolves to a new physical path, emc:/vol/vol2/project1/spec.doc on the EMC volume.]
12
How does F5 Acopia virtualization work?
[Diagram: after ILM operation 2, the virtual path still does not change; the file now resides at nas:/vol/vol1/project1/spec.doc on the NAS volume. A minimal code sketch of this virtual-to-physical mapping follows.]
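The three slides above describe one mapping: the client-visible (virtual) path stays constant while migrations change only the physical target. A minimal Python sketch of such a routing table, assuming a simple in-memory dictionary; the class and method names are illustrative, not the ARX implementation.

```python
# Minimal sketch of a virtual volume manager's path routing table.
# All names are illustrative; this is not the ARX implementation.

class VirtualVolume:
    def __init__(self, name):
        self.name = name
        # virtual path -> (physical filer, physical path)
        self.routes = {}

    def place(self, virtual_path, filer, physical_path):
        """Record where a virtual path currently lives."""
        self.routes[virtual_path] = (filer, physical_path)

    def resolve(self, virtual_path):
        """Route a client's virtual path to its current physical location."""
        return self.routes[virtual_path]

    def migrate(self, virtual_path, new_filer, new_physical_path):
        """ILM operation: move the file; the virtual path never changes."""
        self.routes[virtual_path] = (new_filer, new_physical_path)


vol = VirtualVolume("eng")
vol.place("/eng/project1/spec.doc", "na", "/vol/vol2/project1/spec.doc")
print(vol.resolve("/eng/project1/spec.doc"))   # ('na', '/vol/vol2/project1/spec.doc')

vol.migrate("/eng/project1/spec.doc", "emc", "/vol/vol2/project1/spec.doc")
print(vol.resolve("/eng/project1/spec.doc"))   # client path unchanged, new physical target
```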
13
F5’s Acopia virtualization layers
[Diagram: users & application servers mount the Presentation Namespace (PNS) published by the Adaptive Resource Switch (ARX); the Virtual Volume Manager maps the namespace mount point to attach points on the back-end NAS & file servers.]
14
F5’s Acopia architecture
[Diagram: CONTROL PLANE and DATA PLANE (fast path) of the ARX, showing the ASM, SCM and NSM modules, NVRAM, RAID drives, temperature / fan / power sensors, and the management, control and data interfaces.]
Data plane (fast path):
– Wire speed, low latency
– Non-metadata operations, e.g. file read / write
– In-line policy enforcement
Control plane:
– High performance SMP architecture
– Metadata services
– Policy
15
ARX Architecture Differentiators
Network switch purpose-built to meet the challenges of global file management
– Real-time management of live data
3-tiered architecture provides superior scale & reliability
– Separate I/O, control & management planes
– Dedicated resources for each plane
– Each plane can be scaled independently
PC-based appliances are inadequate
– Single PCI bus, processor & shared memory – bottleneck!
– File I/O, policy and management all share the same resources
– Single points of failure
Data integrity and reliability
– Integrity of in-flight operations ensured with redundant NVRAM
– External metadata and the ability to repair & reconstruct metadata
16
How does Acopia work?
The ARX acts as a proxy for all file servers / NAS devices
– The ARX resides logically in-line
– Uses virtual IP addresses to proxy back-end devices
Proxies NFS and CIFS traffic
Provides virtual-to-physical mapping of the file systems
– Managed volumes are configured & imported
– Presentation volumes are configured
17
What is File Routing / Metadata?
Metadata is stored on a highly available filer
No proprietary data is stored in the metadata; the metadata can be completely rebuilt (100% disposable)
The ARX ensures the integrity of in-flight operations (a rough sketch follows)
– If the ARX loses power or is reset, the NVRAM holds a list of the outstanding transactions
– When the ARX boots back up, before it services any user requests, it validates all of the pending transactions in the NVRAM and takes the appropriate action
– Ensures transactions are performed in the correct order
The ARX provides tools to detect / repair / rebuild metadata
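The recovery behavior described above resembles a write-ahead log that is validated and replayed before any client request is serviced. A small illustrative Python sketch; the log format and the example transaction are hypothetical, not the ARX's NVRAM layout.

```python
# Illustrative sketch of NVRAM-style transaction replay on boot.
# The log format and operations are hypothetical, not the ARX's.

import json

def log_transaction(log, txn):
    """Record an intent before acting on it (write-ahead)."""
    log.append(json.dumps(txn))

def replay_pending(log, apply):
    """On boot, validate and re-apply pending transactions in order
    before any client request is serviced."""
    for entry in log:
        txn = json.loads(entry)
        if not txn.get("committed"):
            apply(txn)

nvram_log = []
log_transaction(nvram_log, {"op": "rename", "src": "/a.txt", "dst": "/b.txt", "committed": False})
replay_pending(nvram_log, lambda t: print("re-applying", t["op"], t["src"], "->", t["dst"]))
```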
18
Filesets and Placement Policy Rules
Placement rules migrate filesRules use filesets as sources– Filesets supply matching criteria for policy rules– Filesets can match files based on age, size or “name”
• Age = groups files based on last accessed or last modified date / time
• “Name” = matches any portion of a file’s name using Simple criteria (similar to DOS/Unix, e.g. *.ppt) or POSIX compliant Regular Expressions (e.g. [a-z]*\.txt)
– Filesets can be combined to form unions or intersections
Place rules can target a specific share or a share farm
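The fileset semantics above (match by name, age or size; combine with unions and intersections) can be expressed as composable predicates. A minimal Python sketch, assuming plain os.stat results; the criteria and thresholds are examples, not ARX policy syntax.

```python
# Illustrative fileset matching in the spirit of the rules above.
# Criteria and thresholds are examples, not ARX syntax.

import fnmatch, os, re, time

def name_fileset(pattern, regex=False):
    """Match by name: simple DOS/Unix wildcards or POSIX-style regex."""
    if regex:
        rx = re.compile(pattern)
        return lambda path, st: rx.fullmatch(os.path.basename(path)) is not None
    return lambda path, st: fnmatch.fnmatch(os.path.basename(path), pattern)

def age_fileset(days, attr="st_mtime"):
    """Match files whose last-modified (or last-accessed) time is older than `days`."""
    cutoff = time.time() - days * 86400
    return lambda path, st: getattr(st, attr) < cutoff

def size_fileset(min_bytes):
    """Match files at least `min_bytes` in size."""
    return lambda path, st: st.st_size >= min_bytes

def union(*filesets):
    return lambda path, st: any(f(path, st) for f in filesets)

def intersection(*filesets):
    return lambda path, st: all(f(path, st) for f in filesets)

# Example: presentations not modified in 90 days, OR anything over 1 GiB
stale_ppt = intersection(name_fileset("*.ppt"), age_fileset(90))
candidates = union(stale_ppt, size_fileset(1 << 30))
print(name_fileset("*.ppt")("slides/deck.ppt", None))   # True
```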
19
What are Acopia’s policy differentiators?
Load balancing is unique to Acopia
– No other virtualization device is able to do in-line placement and real-time load balancing
Multi-protocol, multi-vendor migration is unique to Acopia
The ability to tier storage without requiring stubs is unique to Acopia
In-line policy enforcement is unique to Acopia
– Competitive solutions require expensive “treewalks” to determine what to move / replicate
The flexibility and scale of the migration / replication capability is unique to Acopia
– From an individual file / fileset to an entire virtual volume
20
High Availability Overview
ARXs are typically deployed in a redundant pair
The primary ARX keeps synchronized state with the secondary ARX
– In-flight transactions (NVRAM), global configuration, Network Lock Manager (NLM) clients, Duplicate Request Cache (DRC)
The ARX monitors resources to determine failover criteria
– The operator can optionally define certain resources as “critical” so they are considered in the failover criteria, e.g. default gateway, critical share, etc.
The ARX does not store any user data on the switches
21
How to Deploy Acopia in the Network
ARX HA subsystem uses a 3 voting systems to avoid “split brain” scenarios– “Split Brain” is a situation
where a loss in communication causes both devices in an HA pair to service traffic requests which can result in data corruption
Heartbeats are exchanged between – The primary switch – The standby switch – The Quorum Disk
• The Quorum Disk is a share on a server/filer
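The split-brain avoidance above amounts to a majority vote among three voters: the two switches and the quorum-disk share. A simple illustrative Python sketch of that rule; it is not the ARX HA implementation.

```python
# Simple sketch of quorum-based failover voting (illustrative only).
# Each of the two switches plus the quorum-disk share contributes one vote;
# a switch only services traffic if it can see a majority (2 of 3).

def can_serve(peer_reachable: bool, quorum_reachable: bool) -> bool:
    """A switch counts itself plus every voter it can still reach."""
    votes = 1 + int(peer_reachable) + int(quorum_reachable)
    return votes >= 2

# Normal operation: both voters visible -> serve
assert can_serve(peer_reachable=True, quorum_reachable=True)
# Peer lost but quorum share still visible -> safe to take over
assert can_serve(peer_reachable=False, quorum_reachable=True)
# Isolated from both peer and quorum -> stop serving to avoid split brain
assert not can_serve(peer_reachable=False, quorum_reachable=False)
```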
[Diagram: clients connect through workgroup switches, distribution switches and core routers / layer-3 switches to the Acopia HA pair (Switch A and Switch B), which fronts the NAS & file servers; the quorum disk is a share on one of those filers.]
22
The Demo Topology
23
Acopia Demo – What Will be Shown
Data Migration
Tiering
Load Balancing (Share Farm)
Inline policy and file placement by name
Shadow Volume Replication
24
Data Migration
25
Usage Scenario: Data Migration
Movement of files between heterogeneous file servers
Drivers:
– Lease rollover, vendor switch, platform upgrades, NAS consolidation
Benefits:
– Reduce outages and business disruption
– Faster migrations
– Lower operational overhead
• No client reconfiguration, automated
26
Data Migration with Acopia
Solution:
– Transparent migration at any time
– Paths and embedded links are preserved
– File-level granularity, without links or stubs
– NFS and CIFS support across multiple vendors
– Scheduled policies to automate data migration
– CIFS Local Group translation
– CIFS share replication
– Optional data retention on source
– IBM uses ARX for its data migration services
Benefits:
– Reduce outages and business disruption
– Lower operational overhead
• No client reconfiguration
• Decommissioning without disruption
• Automation
27
Data Migration (One for One)
[Diagram: clients see “home on server1” (U:) through the F5 Acopia ARX, which fronts NAS-1 and NAS-2.]
Transparent migrations occur at the file system level via standard CIFS / NFS protocols
A file system is migrated in its entirety to a single target file system
All names, paths, and embedded links are preserved
Multiple file systems can be migrated in parallel
Inline policy steers all new file creations to the target filer (a rough sketch follows)
– No need to go back and re-scan as with Robocopy or rsync
– No need to quiesce clients to pick up final changes
File systems are probed by the ARX to ensure compatibility before merging
The ARX uses no linking or stub technology, so it can be backed out easily
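A one-for-one migration as described above is essentially a single copy pass plus inline steering of new creates to the target, so the source never needs a second scan. A rough Python sketch over plain directory trees; paths and helper names are illustrative, not ARX behavior.

```python
# Illustrative sketch of a one-for-one migration with inline placement:
# existing files are copied once, and any new create goes straight to the
# target, so no re-scan pass is needed. Not ARX code.

import os, shutil

def migrate_tree(source_root, target_root):
    """Copy every file once, preserving the relative path.
    shutil.copy2 also preserves modification and access times."""
    for dirpath, _dirs, files in os.walk(source_root):
        rel = os.path.relpath(dirpath, source_root)
        os.makedirs(os.path.join(target_root, rel), exist_ok=True)
        for name in files:
            shutil.copy2(os.path.join(dirpath, name),
                         os.path.join(target_root, rel, name))

def create_file(rel_path, data, target_root):
    """Inline policy: new files are created on the target from the start."""
    dest = os.path.join(target_root, rel_path)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with open(dest, "wb") as f:
        f.write(data)
```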
28
Data Migration (One for One)
[Diagram: clients see “home on server1” (U:) through the Acopia ARX, which fronts NAS-1 and NAS-2.]
The file and directory structure is identical on the source and target file systems
All CIFS and NFS file system security is preserved
– If CIFS local groups are in use, SIDs will be translated by the ARX
File modification and access times are not altered during the migration
– The ARX will preserve the create time (depending on the filer)
The ARX can perform a true multiprotocol migration where both CIFS and NFS attributes / permissions are transferred
– Robocopy does CIFS only, rsync NFS only
The ARX can optionally replicate CIFS shares and the associated share permissions to the target filer
29
Data Migration: Fan Out
[Diagram: clients see “home on server1” (U:) through the Acopia ARX, which distributes data across NAS-1 through NAS-4.]
Fan-out migration allows the admin to change a sub-optimal data layout caused by reactive data management policies
Structure can be re-introduced into the environment via fileset-based policies that allow for migrations using:
• Anything in the file name or extension
• File path
• File age (last modify or last access)
• File size
• Include or exclude (all files except for)
• Any combination (union or intersection)
• For the more advanced user, regular expression matching is available
Rules operate “in-line”
– Any new files created are automatically created on the target storage; no need to re-scan the source again
30
Data Migration: Fan In
[Diagram: clients see “home on server1” (U:) through the Acopia ARX, which consolidates NAS-1 through NAS-4.]
Fan-in migration allows the admin to take advantage of larger / more flexible file system capabilities on new NAS platforms
Separate file systems can be merged and migrated into a single file system
The ARX can perform a detailed file system collision analysis before merging file systems (a sketch of such a report follows)
– A collision report is generated for each file system
– The admin can choose to manually remove collisions or let the ARX rename the offending files and directories
Like directories will be merged
– Clients see the aggregated directory
Where a directory name is the same but has different permissions, the ARX can synchronize the directory attributes
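The collision analysis described above can be pictured as a pass that records which source file system owns each relative path and reports any path owned by more than one. An illustrative Python sketch over plain directory trees; not the ARX's report format.

```python
# Illustrative sketch of the collision analysis done before a fan-in merge:
# report any relative path that exists on more than one source file system.

import os
from collections import defaultdict

def collision_report(source_roots):
    """Map each relative file path to the list of sources that contain it,
    keeping only paths claimed by more than one source."""
    owners = defaultdict(list)
    for root in source_roots:
        for dirpath, _dirs, files in os.walk(root):
            rel_dir = os.path.relpath(dirpath, root)
            for name in files:
                owners[os.path.join(rel_dir, name)].append(root)
    return {rel: roots for rel, roots in owners.items() if len(roots) > 1}

# Example: collisions = collision_report(["/mnt/nas1/home", "/mnt/nas2/home"])
# Each colliding path can then be renamed or removed before the merge.
```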
31
Data Migration: Name Preservation
[Diagram: clients see “home on \\server1” (U:); the Acopia ARX takes over the \\Server1 name while the source filer becomes \\Server1-old.]
Acopia utilizes a name-based takeover method for migrations in most cases
– No 3rd-party namespace technology is required; however, DFS can be layered over the ARX
All client presentations (names, shares, exports, mount security) are preserved
The source filer’s CIFS name is changed and the original name is transferred to the ARX, allowing for transparent insertion of the ARX solution
– This helps avoid issues with embedded links in MS Office documents
The ARX joins Active Directory with the original source filer CIFS name
If WINS was enabled on the source filer it is disabled, and the ARX assumes the advertisement of any WINS aliases
For NFS, the source filer’s DNS entry is updated to point to the ARX
– Or automounter / DFS maps can be updated
The ARX can assume the filer’s IP address if needed
32
Case Study: NAS Consolidation
“Acopia's products allow us to consolidate our back-end storage resources while providing data access to our users without disruption.”
Chief Technology Architect
Environment: Windows file servers, NAS
Critical Issue: Large scale file server to NAS consolidation; 24x7 environment
Reasons: Cost savings in rack space, power, cooling and operations
Requirements: Move the data without disrupting the business
Solution: ARX6000 clusters
Result: >80 file servers migrated to NAS without disruption
Migrations completed faster, with less intervention
One of the world's leading financial services companies, with global presence
33
Migration Demonstration
34
Acopia Demo Topology
[Diagram: a Windows client connects through a layer-2 switch to an ARX500; the virtual view shows a virtual server presenting virtual volume V:, and the physical view shows the physical file systems J: and L:.]
35
Storage Tiering – Information Lifecycle Management
36
Usage Scenario: Tiering / ILM
Match the cost of storage to the business value of data
– Files are automatically moved between tiers based on flexible criteria such as age, type, size, etc.
Drivers:
– Storage cost savings, backup efficiencies, compliance
Benefits:
– Reduced CAPEX
– Reduced backup windows and infrastructure costs
37
Storage Tiering with F5 Acopia
Solution:
– Automated, non-disruptive data placement of flexibly defined filesets
– Multi-vendor, multi-platform
– Clean (no stubs or links)
– File movement can be scheduled
Benefits:
– Reduced CAPEX
• Leverage cost-effective storage
– Reduced OPEX
– Reduced backup windows and infrastructure costs
38
Storage Tiering with F5 Acopia
Can be applied to all data or a subset via filesets
Operates on either last access or last modify time
The ARX can run tentative “what if” reports to allow for proper provisioning of lower tiers
Files accessed or modified on lower tiers can be brought up to tier 1 dynamically (a minimal sketch of such a tiering rule follows)
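A tiering rule of the kind described above can be expressed as a classification on last-access age, plus a "what if" pass that sizes each tier before anything is moved. A minimal Python sketch; the tier names and the 90-day threshold are examples only.

```python
# Illustrative tiering rule: demote files whose last access is older than a
# cutoff, and promote them back when touched again. Thresholds are examples.

import os, time

def classify(path, demote_after_days=90):
    """Return the tier a file should live on, based on last access time."""
    idle = time.time() - os.stat(path).st_atime
    return "tier2" if idle > demote_after_days * 86400 else "tier1"

def what_if_report(paths, demote_after_days=90):
    """'What if' sizing: how many bytes would land on each tier."""
    totals = {"tier1": 0, "tier2": 0}
    for p in paths:
        totals[classify(p, demote_after_days)] += os.stat(p).st_size
    return totals
```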
39
“Based upon these savings, we estimate that we will enjoy a return on our Acopia investment in well under a year.”
Reinhard Frumm, Director Distributed IS, Messe Dusseldorf
Storage Tiering Case Study
[Diagram: users and applications access an Acopia ARX1000 front-ending Tier 1 and Tier 2 NetApp storage (a NetApp 3020 and a NetApp 940c).]
International trade-show company
Challenges
– Move less business-critical data to less expensive storage, non-disruptively to users
Solution
– ARX1000 cluster
Benefits
– 50% reduction in disk spend
– Dramatic reduction in backup windows (from ~14 hours to ~3 hours) and backup infrastructure costs
40
Tiering Demonstration
41
Acopia Demo Topology
[Diagram: a Windows client connects through a layer-2 switch to an ARX500; the virtual view shows a virtual server presenting virtual volume V:, and the physical view shows the physical file systems L: (Tier 1) and N: (Tier 2).]
42
Load Balancing
43
Load Balancing with Acopia
Solution:
– Automatically balances new file placement across file servers
– Flexible dynamic load balancing algorithms
– Uses existing file storage devices
Benefits:
– Increased application performance
– Improved capacity utilization
– Reduced outages associated with data management
44
Load Balancing
A common problem for our customers is applications that require large amounts of space
– Administrators are reluctant to provision a large file system, because if it ever needs to be recovered it will take too long
– They tend to provision smaller file systems and force the application to deal with adding new storage locations
– This typically requires application downtime and adds complexity to the application
The ARX can decouple the application from the physical storage so that the application only needs to know a single storage location
– The application no longer needs to deal with multiple storage locations
The storage administrator can now keep file systems small and dynamically add new storage without disruption
– No more downtime when capacity thresholds are reached
[Diagram: an application writes to a single location backed by six 2 TB file systems.]
45
Load Balancing
One or more file systems can be aggregated together into a share farm
Within the share farm the ARX can load balance new file creates using the following algorithms
– Round robin, weighted round robin, latency, capacity
The ARX load balances with file-level granularity, but constraints can be added to keep files and/or directories together
The ARX can also maintain free-space thresholds for each file system in the share farm
– When a file system crosses the threshold it is removed from the new-file placement algorithm
The ARX can also be set up to automatically migrate files off a file system if a certain free-space threshold is not maintained (a sketch of new-file placement follows)
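New-file placement across a share farm, as described above, can be sketched as choosing a target share by free capacity (with a free-space floor) or by simple round robin. An illustrative Python sketch; the share records and thresholds are examples, not ARX configuration.

```python
# Illustrative sketch of new-file placement across a share farm.
# Shown: capacity-based choice with a free-space floor, plus round robin.
# The algorithm names mirror the slide; the code is an example, not ARX's.

import itertools

def pick_by_capacity(shares, min_free_bytes=0):
    """Choose the share with the most free space, skipping any share that
    has fallen below its free-space threshold."""
    eligible = [s for s in shares if s["free"] > min_free_bytes]
    if not eligible:
        raise RuntimeError("no share has capacity for new files")
    return max(eligible, key=lambda s: s["free"])

def round_robin(shares):
    """Simple round-robin placement cycle over the farm."""
    return itertools.cycle(shares)

farm = [{"name": "nas1:/vol1", "free": 500e9},
        {"name": "nas2:/vol1", "free": 120e9}]
print(pick_by_capacity(farm, min_free_bytes=50e9)["name"])   # nas1:/vol1
```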
46
Load Balancing Case Study
Challenges
– Infrastructure was a bottleneck to production of digital content
– Difficult to provision new storage
Solution
– ARX6000 cluster
Benefits
– Ability to digitize >500% more music
– 20% reduction in OPEX costs associated with managing storage
– Reduction in disk spend due to more efficient utilization of existing NAS
“Acopia’s products increased our business workflow by 560%”
Mike Streb, VP Infrastructure, WMG
[Diagram: compute nodes access a NetApp 3050 through an Acopia ARX6000.]
47
Acopia Demo Topology
[Diagram: a Windows client maps drive M: to virtual volume V: on a virtual server presented by the ARX500 through a layer-2 switch; the physical view shows a Tier 1 share farm (L: and N:) and Tier 2 storage.]
48
Inline Policy Enforcement / Place by Name
49
Inline Policy Enforcement / Place by Name
Classification and placement of data based on name or path
Drivers:
– Tiered storage, business policies, SLAs for applications or projects, migration based on file type or path
Benefits:
– File-level granularity
– Can migrate existing file systems to comply with the current policy
– Operates inline for real-time policy enforcement on new data creation
50
File Based Placement
Filesets
– Group of files based on name, type, extension, string, path, size
– Unions, intersections, include and exclude supported
Storage Tiers
– Arbitrary definition set by the enterprise
– Can consist of a single share or a share farm with capacity balancing
File Placement
– The namespace is walked only once for the initial placement of files
– In-line policy enforcement places files on the proper tier in real time (a minimal sketch follows)
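In-line place-by-name enforcement means the tier is chosen from the file name at create time, so no later tree-walk is needed. A minimal Python sketch; the rules and tier names are examples, not ARX syntax.

```python
# Illustrative in-line enforcement of a place-by-name rule: the tier is
# chosen at create time from the file name, so no later tree-walk is needed.

import fnmatch

PLACEMENT_RULES = [("*.mp3", "tier2"), ("*.iso", "tier2")]  # everything else -> tier1

def tier_for(filename, default="tier1"):
    """Return the tier for a new file based on the first matching name rule."""
    for pattern, tier in PLACEMENT_RULES:
        if fnmatch.fnmatch(filename, pattern):
            return tier
    return default

assert tier_for("lecture.mp3") == "tier2"
assert tier_for("spec.doc") == "tier1"
```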
51
Demonstration
Inline Policy Enforcement and File Placement by Name
52
Acopia Demo Topology
[Diagram: a Windows client maps drive M: to virtual volume V: on a virtual server presented by the ARX500 through a layer-2 switch; the physical view shows a Tier 1 share farm (L: and N:) and Tier 2 storage.]
53
Shadow Volume Replication
54
Data Replication with Acopia
Technology:
– Fileset-based replication
– NFS & CIFS across multiple platforms
– Replicas may be viewed
– Supports multiple targets
– Change-based updates only (file deltas); a sketch of such an update pass follows
Benefits:
– The target is not required to be of like storage type
– WAN bandwidth preservation
– Can be used for centralized backup applications
[Diagram: applications and users at the primary site access the ARX cluster and the Acopia global namespace; replication runs over the IP network to the secondary site.]
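A change-based (file-delta) update pass, as described above, can be sketched as copying only files whose size or modification time differs from the state recorded on the previous run. An illustrative Python sketch over plain directory trees; the state format is an assumption, not the ARX's shadow-volume mechanism.

```python
# Illustrative change-based update pass for a shadow volume: only files whose
# size or mtime differs from the recorded state are copied to the replica.

import os, shutil

def sync_deltas(source_root, replica_root, state):
    """Copy changed or new files; `state` maps relative path -> (size, mtime)."""
    for dirpath, _dirs, files in os.walk(source_root):
        rel_dir = os.path.relpath(dirpath, source_root)
        for name in files:
            rel = os.path.join(rel_dir, name)
            src = os.path.join(source_root, rel)
            st = os.stat(src)
            sig = (st.st_size, int(st.st_mtime))
            if state.get(rel) != sig:                      # changed or new file
                dst = os.path.join(replica_root, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)
                state[rel] = sig
    return state
```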
55
World’s largest equipment rental company
Challenges
– Upgrade NAS platform
– Introduce lower-cost ATA disk
– File-based disaster recovery solution
Solution
– ARX1000 cluster at the primary data center
– ARX1000 at the disaster recovery facility
Benefits
– NAS upgrade with no impact to users
– 50% savings through use of ATA disk
– Cost-effective disaster recovery solution
– Dramatic reduction in backup and replication times
[Diagram: Tier 1 and Tier 2 storage at the primary site replicate over the WAN to a replica at the disaster recovery site.]
“Acopia has reduced our total backup and replication times by about 70%.”
Bonnie Stiewing, Senior Systems Administrator, United Rentals
Data Replication Case Study
56
Demonstration
Shadow Volume Replication
57
Acopia Demo Topology
[Diagram: a Windows client maps drives N: and M: to virtual volumes V: and W: on a virtual server presented by the ARX500 through a layer-2 switch; the physical view shows Tier 1 (L:, share farm) and Tier 2 (O:) file systems, with replication to a shadow volume.]
58