
Maximize Solid State Storage and Performance with Storage Foundation

Hands-On Lab

Course Description

Get hands-on experience using Storage Foundation to drive performance while ensuring data availability for mission-critical applications that rely on the unparalleled performance of in-server solid state storage. This lab walks through multiple ways to configure Storage Foundation today to assist your customers, and offers a sneak preview of advanced features available in the next release of Storage Foundation. Join Symantec and Fusion-IO to learn the configuration, best practices, and value propositions of 2-way mirrors, SmartTier, SmartIO, and Flexible Storage Sharing.

At the end of this lab, you should be able to:  

§ Compare performance of SSD and SAN storage devices

§ Use various features and functionality of the new Storage Foundation 6.1 management CLI, sfcache.

§ Configure and manage Multi Volume File Systems (MVFS), including setup, load balancing, and adding/removing devices.

§ Configure and apply a data management policy to an MVFS utilizing SmartTier through the CLI and VOM.

§ Manage SSD and SAN devices with Storage Foundation in mixed environments.

§ Identify use cases relative to the performance advantages of SSDs when used with common functions such as mirroring, snapshots and data placement.

Notes: This lab assumes prerequisite knowledge of Linux, Storage Foundation, and general storage concepts.

FORWARD-LOOKING STATEMENTS: Any forward-looking indication of plans for products is preliminary and all future release dates are tentative and are subject to change. Any future release of the product or planned modifications to product capability, functionality, or feature are subject to ongoing evaluation by Symantec, and may or may not be implemented and should not be considered firm commitments by Symantec and should not be relied upon in making purchasing decisions.

 


Lab Agenda

Lab Section   Exercise Name
Lab 1         Simple Device Performance Comparison
Lab 2         Storage Foundation 6.1 SmartIO
Lab 3         Storage Foundation Multi Volume File Systems
Lab 4         SmartTier utilizing Multi Volume File Systems

 

Lab Layout

Host Server: Dell R720

Virtual Environment: vSphere 5.1; RHEL 6.3 virtual guest

Software: Storage Foundation 6.1 (beta)

Storage and Device Mapping:

Devices
• Fusion IO ioDrive2 cards
• SAN storage: 3PAR

Mapping
• sda – root partition
• sdb – sdi (3PAR LUNs)
• sdj – sdl (Fusion IO LUNs)

Access: W7 desktop

Browser Terminal Session Access
• https://sgtw.tso-cloud.com
• Student ID = tso\ia-lab-xx
• Student PW = Vi$1On
• VM guest IP = 10.60.115.1xx
  o User = root
  o Password = symc4now

Additional Notes:

§ The lab will be directed and provide you with step-by-step walkthroughs of key features.

§ Feel free to follow the lab using the instructions on the following pages. You can optionally perform this lab at your own pace.

§ Be sure to ask your instructor any questions you may have.

§ Thank you for coming to our lab session.

 

 

 

 


Lab Exercise 1 – Simple Device Performance Comparison

The example below is a simple comparison of a Fusion IO card and a SAN virtual drive. It will not show top speeds, due to the student environment and vSphere configuration. Identically sized file systems will be created on 40g SSD and HDD devices. Since SSD devices excel at read requests, run the read workloads in step 3 several times and the performance gap between the SSD and HDD devices will widen.

1. First, create two disk groups, volumes, mount points, and file systems: one set for the SSD and one for the HDD. VxVM’s intelligent device discovery module can automatically determine the media type of devices from various vendors. If it does not recognize a device, you can manually mark it as SSD with the ‘vxdisk -f set sdi mediatype=ssd’ command.

a. vxdisk -e list

b. vxdg init perfDG dev10=sdb

c. vxassist -g perfDG make perfVOL1 39g dev10

d. mkfs -t vxfs /dev/vx/dsk/perfDG/perfVOL1

e. mkdir /hddFS

f. mount -t vxfs /dev/vx/dsk/perfDG/perfVOL1 /hddFS

g. vxdg init perfDG2 dev11=sdi

h. vxassist -g perfDG2 make perfVOL2 39g dev11

i. mkfs -t vxfs /dev/vx/dsk/perfDG2/perfVOL2

j. mkdir /ssdFS

k. mount -t vxfs /dev/vx/dsk/perfDG2/perfVOL2 /ssdFS
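
Before running the benchmarks, it can help to confirm the layout just created (both commands already appear in this lab; the grep filter is merely a convenience):

vxdisk -e list                  # sdb should belong to perfDG, sdi to perfDG2
df -h | grep -E 'hddFS|ssdFS'   # both 39g file systems should be mounted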


2. Now use vxbench to generate random write workloads. VxBench is a homegrown benchmark utility that is well suited for testing different types of I/O operations. It allows users to simulate read and write workloads, both sequential and random, on file systems.

a. /opt/VRTSspt/FS/VxBench/vxbench_rhel6_x86_64 -w rand_write -i iosize=4k,iocount=20480,maxfilesize=8g /hddFS/hddtest.txt

b. /opt/VRTSspt/FS/VxBench/vxbench_rhel6_x86_64 -w rand_write -i iosize=4k,iocount=20480,maxfilesize=8g /ssdFS/ssdtest.txt

3. Use vxbench to generate random read workloads on the files created in the previous step.

a. /opt/VRTSspt/FS/VxBench/vxbench_rhel6_x86_64 -w rand_read -i iosize=4k,iocount=20480 -c direct /hddFS/hddtest.txt

b. /opt/VRTSspt/FS/VxBench/vxbench_rhel6_x86_64 -w rand_read -i iosize=4k,iocount=20480 -c direct /ssdFS/ssdtest.txt

c. vxstat -o alldgs -sv


d. Optional Exercise: Re-run commands “a” and “b” multiple times to get more insight into the performance differences between SSD and HDD.
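
For that optional exercise, a small shell loop makes the repetition easier (a sketch; the commands and parameters are exactly those of steps 3a and 3b):

for i in 1 2 3; do
    # alternate HDD and SSD random reads; note the timing vxbench prints for each pass
    /opt/VRTSspt/FS/VxBench/vxbench_rhel6_x86_64 -w rand_read -i iosize=4k,iocount=20480 -c direct /hddFS/hddtest.txt
    /opt/VRTSspt/FS/VxBench/vxbench_rhel6_x86_64 -w rand_read -i iosize=4k,iocount=20480 -c direct /ssdFS/ssdtest.txt
done
vxstat -o alldgs -sv    # compare the cumulative read statistics afterwards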

4. Cleanup

a. umount /hddFS

b. rmdir /hddFS

c. umount /ssdFS

d. rmdir /ssdFS

e. vxdg destroy perfDG

f. vxdg destroy perfDG2


Lab Exercise 2 – VxFS and VxVM Caching

Applications with high throughput and low latency needs can exploit Fusion IO solid state devices to significantly accelerate I/O throughput using SmartIO within Symantec Storage Foundation. SmartIO extends the existing storage management capabilities of Storage Foundation to Fusion IO SSDs by adding flexibility in managing the I/O performance characteristics of the applications that utilize those devices. SmartIO provides fine-grained caching at the volume, file, directory, and file system levels to accelerate the reads and writes of any application within Storage Foundation.

This session walks through various features and functionality of the new management CLI, sfcache. SmartIO works across all levels of Storage Foundation; to that end, sfcache is the first product-wide CLI.
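
For orientation, here is the sfcache lifecycle this exercise walks through, collected in one place (every command is taken from the numbered steps that follow):

sfcache create sdi             # create a cache area on the SSD
sfcache list                   # list cache areas and their state
sfcache list sfcachearea_1     # show what the cache area is serving
sfcache stat /betaFS           # cache statistics for a file system
sfcache purge /betaFS          # clear the cached data
sfcache offline sfcachearea_1  # take the cache area offline
sfcache delete sfcachearea_1   # delete the cache area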

VxFS Caching

1. Create a test DG, volume, mount point, and file system

a. vxdg init betaDG dev9=sdb

b. vxassist -g betaDG make betaVOL1 10g dev9

c. mkfs -t vxfs /dev/vx/dsk/betaDG/betaVOL1

d. mkdir /betaFS

e. mount -t vxfs /dev/vx/dsk/betaDG/betaVOL1 /betaFS

f. vxdisk -e list

g. df -h

2. Use the vxbench utility to generate a file for cache testing.


a. /opt/VRTSspt/FS/VxBench/vxbench_rhel6_x86_64 -w rand_write -i iosize=4k,iocount=20480,maxfilesize=8g /betaFS/btest.txt

3. Use the vxbench utility to generate read activity on the file created in Step 2, and use vxstat to look at the performance.

a. /opt/VRTSspt/FS/VxBench/vxbench_rhel6_x86_64 -w rand_read -i iosize=4k,iocount=20480 -c direct /betaFS/btest.txt

b. vxstat -o alldgs -sv

4. Enable File System level caching on a Storage Foundation volume using the Fusion IO device (/dev/sdi)

a. sfcache create sdi

b. sfcache list

5. Verify the /betaFS file system is enabled for SmartIO

a. sfcache list sfcachearea_1

6. Use the same vxbench command from step 3 to generate read activity on the file /betaFS/btest.txt, and look at the performance numbers.

a. /opt/VRTSspt/FS/VxBench/vxbench_rhel6_x86_64 -w rand_read -i iosize=4k,iocount=20480 -c direct /betaFS/btest.txt

b. vxstat -o alldgs -sv


7. Use the sfcache CLI to get statistics on the cache.

a. sfcache stat /betaFS

i. Note: The first reads will “warm” the cache, so the initial performance may not be substantially impacted, as those first reads will continue to be served by the back-end SAN.

b. Optional Exercise: run the read generating vxbench command multiple times and see how performance is impacted.

i. /opt/VRTSspt/FS/VxBench/vxbench_rhel6_x86_64 -w rand_read -i iosize=4k,iocount=20480 -c direct /betaFS/btest.txt

ii. sfcache stat /betaFS


8. Clear the cached data from /betaFS

a. sfcache purge /betaFS

b. sfcache stat /betaFS

9. Delete the VxFS cache area created in Step 4

a. sfcache offline sfcachearea_1

b. sfcache delete sfcachearea_1

10. Optional Exercise: If you are interested in the new sfcache CLI that will be part of the SFHA 6.1 product, here are a few more exercises.

a. Create a 1GB VxFS cache area called “fs1” and verify it is enabled

i. sfcache create fs1 1g sdi

ii. sfcache list

iii. sfcache list fs1

b. Run through similar exercises in steps 3-8 above to generate load

c. Disable caching on /betaFS

i. sfcache disable /betaFS


ii. sfcache list fs1

d. Delete the VxFS cache area “fs1” above

i. sfcache offline fs1

ii. sfcache delete fs1

VxVM Caching

1. Remove the /betaFS file system

a. umount /betaFS

b. rmdir /betaFS

2. Enable volume level caching on all Storage Foundation volumes

a. sfcache create vmc1 -t vxvm sdi

b. sfcache list

3. Generate load on the volume using dd

a. dd if=/dev/vx/rdsk/betaDG/betaVOL1 of=/dev/null bs=65536 count=10

b. sfcache stat vmc1

Note: On the first read, ALL data comes from the back-end SAN, so we expect the cache hits to be “0”. (ART in the sfcache stat output is the Average Read Time.)

c. dd if=/dev/vx/rdsk/betaDG/betaVOL1 of=/dev/null bs=65536 count=10


d. sfcache stat vmc1

Note: 1 of 2 reads comes from the cache, making the cache hit ratio 50%

e. Optional: Continue to run “dd” commands to generate more I/O from our volume and see how it impacts performance and the cache hit ratio.
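
A small loop makes that optional exercise quicker (a sketch built only from the dd command and sfcache stat call used above):

for i in 1 2 3 4 5; do
    # each pass re-reads the same 640 KB region, so hits should climb toward 100%
    dd if=/dev/vx/rdsk/betaDG/betaVOL1 of=/dev/null bs=65536 count=10
done
sfcache stat vmc1    # check how the hit ratio moved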

4. Now shrink the newly created cache area by 50%

a. sfcache resize 20g vmc1

b. sfcache list

5. Stop the caching on betaVOL1

a. sfcache disable betaDG/betaVOL1

b. sfcache list vmc1

6. Then stop the caching on the “cache area”

a. sfcache offline vmc1

b. sfcache list


7. Optional: If you are interested in the new sfcache CLI that will be part of the SFHA 6.1 product, here are a few more exercises.

a. Re-enable the “cache area” for manual caching only, not automatic

i. sfcache set vmc1 --noauto

ii. sfcache online vmc1

iii. sfcache list

b. Now verify that no volumes are being cached

i. sfcache list vmc1

c. Enable and verify caching on betaDG/betaVOL1

i. sfcache enable betaDG/betaVOL1

ii. sfcache list vmc1

d. Destroy the volume level cache area

i. sfcache offline vmc1

ii. sfcache delete vmc1

8. Clean up: destroy the disk group betaDG (/betaFS was already unmounted and its mount point removed in step 1 of this section).

a. vxdg destroy betaDG


Lab Exercise 3 – Storage Foundation MVFS

Multi-volume file systems are file systems that occupy two or more virtual volumes. The collection of volumes is known as a volume set and is made up of disks or LUNs belonging to a single VxVM disk group. A multi-volume file system presents a single namespace, making it transparent to users and applications. When used in conjunction with SmartTier, you can define policies to match data storage with data usage requirements.

The following lab is an example of utilizing MVFS to load balance and control the placement of metadata and normal data across several 3g volumes. VxVM and VxFS provide support for multi-volume file systems. MVFS allows file systems to reside on different classes of devices, so a file system can be composed of both expensive and inexpensive disks or devices. With MVFS, a user can control which data goes on which volume type.

1. Create a disk group and add one Fusion IO SSD and three HDD SAN LUNs. Later in the lab we will define how each device type is used.

a. vxdisk -e list

b. vxdg init mvfsdg1 dev1=sdj

c. vxdg -g mvfsdg1 adddisk dev2=sdc

d. vxdg -g mvfsdg1 adddisk dev3=sdd

e. vxdg -g mvfsdg1 adddisk dev4=sde

f. vxdg list

g. vxprint


2. Create four 3g volumes within the ‘mvfsdg1’ disk group, as shown below, using the vxassist command. Each volume is associated with a specific device.

a. vxassist -g mvfsdg1 make vol1 3g dev1

b. vxassist -g mvfsdg1 make vol2 3g dev2

c. vxassist -g mvfsdg1 make vol3 3g dev3

d. vxassist -g mvfsdg1 make vol4 3g dev4

e. vxprint

3. Create a volume set called ‘dev-vset’ in the ‘mvfsdg1’ disk group and add ‘vol1’ using the vxvset command. As stated earlier, the volume set (VSET) construct in VxVM allows multiple volumes to be grouped together and used as a single unit, transparently to the user.

Once the VSET is created, add vol2 – vol4 and display and verify the results.

a. vxvset -g mvfsdg1 -t vxfs make dev-vset vol1

b. vxvset -g mvfsdg1 addvol dev-vset vol2

c. vxvset -g mvfsdg1 addvol dev-vset vol3

d. vxvset -g mvfsdg1 addvol dev-vset vol4

e. vxvset list

f. vxvset -g mvfsdg1 list dev-vset


4. Now that the volume set is created, create a VxFS file system by specifying the volume set name as an argument to ‘mkfs’.

a. mkfs -t vxfs /dev/vx/dsk/mvfsdg1/dev-vset

5. Create a directory called ‘mvfsmnt1’ and mount /dev/vx/dsk/mvfsdg1/dev-vset on it.

a. mkdir /mvfsmnt1

b. mount -t vxfs /dev/vx/dsk/mvfsdg1/dev-vset /mvfsmnt1

c. df

6. Now we will add some data using the vxbench utility. Once complete, use the ‘fsvoladm’ command to list the file distribution across the volumes.

a. /opt/VRTSspt/FS/VxBench/vxbench_rhel6_x86_64 -w write -i iosize=4k,iocount=10240,maxfilesize=2g /mvfsmnt1/testFS

b. fsvoladm -H list /mvfsmnt1


7. At this point we can define the load balancing and metadata placement. The data policy name in the example below is ‘loadbal’ and the metadata policy name is ‘meta’. The data will be balanced across all volumes in 2m units, and the metadata will be restricted to vol1, the Fusion IO SSD card. Storing metadata on an SSD device can greatly improve overall file system performance, since most metadata is accessed in a random fashion, and SSD storage provides huge advantages over HDD devices for random I/O, especially reads. Metadata can include extents allocated to structural files, blocks reserved for the super block, the volume label, indirect extents, File Change Log extents, the history log file, access control lists, and so on.

a. fsapadm define -o balance -c 2m /mvfsmnt1 loadbal vol1 vol2 vol3 vol4

b. fsapadm define /mvfsmnt1 meta vol1

c. fsvoladm queryflags /mvfsmnt1/

8. To illustrate the flexibility of MVFS, the examples below show how to control the type of data directed to each device. For example, if you had multiple SSD devices in a volume set, you could allow metadata on each of them.

a. fsvoladm clearflags dataonly /mvfsmnt1/ vol3

b. fsvoladm queryflags /mvfsmnt1/

c. fsvoladm clearflags metadataok /mvfsmnt1/ vol3

d. fsvoladm queryflags /mvfsmnt1/

9. Now we assign and enforce the policies. Once done, verify the policy was applied.

a. fsapadm assignfile /mvfsmnt1/testFS loadbal meta

b. fsapadm enforcefile -f strict /mvfsmnt1/testFS

c. fsvoladm -H list /mvfsmnt1/testFS


10. The following shows that data is redistributed across the surviving volumes when vol3 is gracefully removed. As shown below, the data from vol3 is moved to vol4; it is not rebalanced evenly until the ‘loadbal’ policy is redefined omitting vol3, i.e. {fsapadm define -o balance -c 2m /mvfsmnt1 loadbal vol1 vol2 vol4}. In a production environment you would utilize features such as mirroring and snapshots to protect against device failure. You can even mirror an SSD device to a SAN device, and Storage Foundation will automatically use the SSD for preferred reads (see the sketch after this step).

a. fsvoladm remove /mvfsmnt1 vol3

b. fsvoladm -H list /mvfsmnt1/testFS
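
As a hedged illustration of the mirroring point above (this is not one of the lab’s steps, and using dev2, one of this disk group’s SAN disks, as the mirror target is an assumption for the sketch):

# Sketch: mirror the SSD-backed vol1 onto SAN storage. With mixed media,
# VxVM prefers the faster SSD plex when servicing reads.
vxassist -g mvfsdg1 mirror vol1 dev2
vxprint -g mvfsdg1 -ht vol1    # verify the volume now has two plexes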

11. Add a volume and rebalance. Add vol3 back to /mvfsmnt1, then redefine and enforce the balance policy. Display the contents before and after.

a. fsvoladm add /mvfsmnt1 vol3 3g

b. fsvoladm -H list /mvfsmnt1/testFS

c. fsapadm define -o balance -c 2m /mvfsmnt1 loadbal vol1 vol2 vol3 vol4

d. fsapadm enforcefile -f strict /mvfsmnt1/testFS

e. fsvoladm -H list /mvfsmnt1/testFS


12. Remove the associated policy from the volume set and delete the file /mvfsmnt1/testFS. Leave the mount point /mvfsmnt1 in place; it will be reused in Lab Exercise 4.

a. fsapadm delete /mvfsmnt1 loadbal

b. rm /mvfsmnt1/testFS

Student Notes:


Lab Exercise 4 – SmartTier and MVFS

As shown in the previous lab, multi-volume file systems are file systems that occupy two or more virtual volumes. Veritas File System (VxFS) uses multi-tier online storage by way of the SmartTier feature, which functions on top of the multi-volume file system. A collection of volumes is known as a volume set (VSET). The volume set is made up of disks or disk volumes belonging to a single Veritas Volume Manager (VxVM) disk group. Each volume retains a separate identity for administration purposes, making it possible to control the locations to which individual files are directed. Storage tiers are defined for each device within the MVFS. Placement policies control both initial file location and the circumstances under which existing files are relocated. Placement classes are defined within a policy, and files are relocated to different classes when they meet specific naming, timing, access rate, and storage capacity related conditions. The defined placement classes are associated with storage tiers.

The following lab is an example of applying a simple policy with SmartTier. There are several options, such as data aging and I/O activity. Fusion IO cards are incorporated and defined as Tier 1, so that new or heavily used data can take advantage of the high-performance storage.

SmartTier and simple policy management can be administered through VOM. In this exercise the CLI will be used with a pre-defined placement policy.

Note: Only run steps 1-5 if you did not complete Lab Exercise 3

1. Create a disk group and add one Fusion IO SSD and three HDD SAN LUNs

a. vxdisk -e list

b. vxdg init mvfsdg1 dev1=sdj

c. vxdg -g mvfsdg1 adddisk dev2=sdc

d. vxdg -g mvfsdg1 adddisk dev3=sdd

e. vxdg -g mvfsdg1 adddisk dev4=sde

f. vxdg list


2. Create four 3g volumes within the ‘mvfsdg1’ disk group, as shown below, using the vxassist command.

a. vxassist -g mvfsdg1 make vol1 3g dev1

b. vxassist -g mvfsdg1 make vol2 3g dev2

c. vxassist -g mvfsdg1 make vol3 3g dev3

d. vxassist -g mvfsdg1 make vol4 3g dev4

e. vxprint

3. Create a volume set called ‘dev-vset’ in the ‘mvfsdg1’ disk group and add ‘vol1’ using the vxvset command. Then add vol2 – vol4 and display and verify the results.

a. vxvset -g mvfsdg1 -t vxfs make dev-vset vol1


b. vxvset -g mvfsdg1 addvol dev-vset vol2

c. vxvset -g mvfsdg1 addvol dev-vset vol3

d. vxvset -g mvfsdg1 addvol dev-vset vol4

e. vxvset list

f. vxvset -g mvfsdg1 list dev-vset

4. Now that the volume set is created, create a VxFS file system by specifying the volume set name as an argument to ‘mkfs’.

a. mkfs -t vxfs /dev/vx/dsk/mvfsdg1/dev-vset

5. Create a directory called ‘mvfsmnt1’ and mount /dev/vx/dsk/mvfsdg1/dev-vset on it.

a. mkdir /mvfsmnt1

b. mount -t vxfs /dev/vx/dsk/mvfsdg1/dev-vset /mvfsmnt1

6. Set the tiering (volume/disk) placement tags: tier1 is for the Fusion IO SSD device, and tier2 and tier3 are SAN units; then verify the results. Placement policies utilize the defined volume and tier classes.

a. vxassist -g mvfsdg1 settag vol1 vxfs.placement_class.tier1

b. vxassist -g mvfsdg1 settag vol2 vxfs.placement_class.tier2

c. vxassist -g mvfsdg1 settag vol3 vxfs.placement_class.tier2

d. vxassist -g mvfsdg1 settag vol4 vxfs.placement_class.tier3


e. vxassist -g mvfsdg1 listtag vol1 vol2 vol3 vol4

7. Now apply a pre-configured policy; this is an age-based policy that moves data between the tiers per defined age and access. The policy is set to look for the file extension (.txt); those files are designated under the ‘Key-File-Rules’. All other types are treated under the ‘Normal-File-Rules’ or ‘Low-File-Rules’. You can specify different aging rules for different extensions with this policy example. At the end of this lab are images from VOM showing the creation of the dev_policy.xml used in the example below, and a sketch of what such a policy file can look like follows these commands.

a. fsppadm assign /mvfsmnt1 /opt/VRTSvxfs/etc/dev_policy.xml

b. fsppadm list /mvfsmnt1

c. fsppadm print /mvfsmnt1 (lists the policy variables)

d. fsppadm analyze /mvfsmnt1

e. fsppadm query /mvfsmnt1
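
For reference, a minimal sketch of what an access-age-based placement policy file can look like. This is not the lab’s actual dev_policy.xml (which is not reproduced here); the tier names match step 6, but grouping *.txt files as the key rule and the 7-day relocation threshold are illustrative assumptions:

<?xml version="1.0"?>
<!DOCTYPE PLACEMENT_POLICY SYSTEM "/opt/VRTSvxfs/etc/placement_policy.dtd">
<PLACEMENT_POLICY Version="5.0" Name="dev_policy_sketch">
  <RULE Flags="data" Name="Key-File-Rules">
    <SELECT>
      <PATTERN> *.txt </PATTERN>
    </SELECT>
    <CREATE>
      <ON>
        <DESTINATION>
          <CLASS> tier1 </CLASS>   <!-- new key files land on the SSD tier -->
        </DESTINATION>
      </ON>
    </CREATE>
    <RELOCATE>
      <TO>
        <DESTINATION>
          <CLASS> tier2 </CLASS>   <!-- aged key files move to SAN storage -->
        </DESTINATION>
      </TO>
      <WHEN>
        <ACCAGE Units="days">
          <MIN Flags="gt"> 7 </MIN>   <!-- assumed threshold: not accessed for a week -->
        </ACCAGE>
      </WHEN>
    </RELOCATE>
  </RULE>
</PLACEMENT_POLICY>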

8. Create data files under directory /mvfsmnt1 utilizing the vxbench utility with the ‘Key’ file extension of (*.txt).

a. /opt/VRTSspt/FS/VxBench/vxbench_rhel6_x86_64 -w write -i iosize=4k,iocount=10240,maxfilesize=2g /mvfsmnt1/testFS1.txt

b. /opt/VRTSspt/FS/VxBench/vxbench_rhel6_x86_64 -w write -i iosize=4k,iocount=10240,maxfilesize=2g /mvfsmnt1/testFS2.txt


c. /opt/VRTSspt/FS/VxBench/vxbench_rhel6_x86_64 -w write -i iosize=4k,iocount=10240,maxfilesize=2g /mvfsmnt1/testFS3.txt

d. /opt/VRTSspt/FS/VxBench/vxbench_rhel6_x86_64 -w write -i iosize=4k,iocount=10240,maxfilesize=2g /mvfsmnt1/testFS4.txt

e. fsppadm analyze /mvfsmnt1

f. fsppadm query /mvfsmnt1

g. Run the fsppadm command to show the values specified within the placement policy. You can also use ‘view’ on the actual file located in the /opt/VRTSvxfs/etc directory. There are several other examples of data placement policies within this directory.

i. fsppadm print /mvfsmnt1
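
The four writes in steps a–d differ only in the file name, so a loop is a convenient alternative (a sketch using the exact vxbench parameters from above):

for n in 1 2 3 4; do
    /opt/VRTSspt/FS/VxBench/vxbench_rhel6_x86_64 -w write -i iosize=4k,iocount=10240,maxfilesize=2g /mvfsmnt1/testFS${n}.txt
done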


9. The following steps are from the File Placement Policy Wizard within VOM and the old VEA GUI interface. Policies can be created either through the Wizard or manually, using the examples in /opt/VRTSvxfs/etc.

a. Select File Placement Policy

b. Now you would select the type of policy; for this lab, ‘Access age-based’ was used.


c. Then select the disk or storage tiers that were defined with the vxassist settag command.

d. Specify the file management criteria.


e. Now the relocation-specific parameters are set; these are the Key-File-Rules.

f. Then specify the Lowest-File-Rules, for dump and log files.


g. The Wizard will generate the following summary.

h. Name and commit the policy.

FORWARD-LOOKING STATEMENTS: Any forward-looking indication of plans for products is preliminary and all future release dates are tentative and are subject to change. Any future release of the product or planned modifications to product capability, functionality, or feature are subject to ongoing evaluation by Symantec, and may or may not be implemented and should not be considered firm commitments by Symantec and should not be relied upon in making purchasing decisions.


Student Notes:

Symantec links:

Using Dynamic Storage Tiering

http://www.symantec.com/page.jsp?id=yellowbooks