

Storage management with LVM2, mdadm and device mapper

Introduction
Exercise: Preparing block devices for LVM2 use
Exercise: Creating physical volumes and volume groups
Exercise: Creating logical volumes and file systems
Exercise: Creating and checking ext4 file systems (mkfs/fsck)
Exercise: Mounting file systems at bootup time
Exercise: Creating a snapshot volume
Exercise: Increasing logical volumes and file systems
Exercise: Adding a physical volume to a volume group
Exercise: Removing/replacing a physical volume
Exercise: Setting up a RAID1 device using mdadm
Exercise: Setting up Encryption using dm_crypt and LUKS

Introduction

In the early days, Linux could only be installed on fixed hard disk partitions (primary or logical partitions on PCs), which are usually hard to change after the fact, especially if Linux had to “live” alongside another operating system (e.g. Microsoft Windows) on the same hard disk drive. Making changes to the existing partitioning layout usually involved using proprietary tools like Partition Magic or biting the bullet and re-installing everything from scratch after changing the partition configuration. Also, it was not possible to create file systems that could span across several physical devices or to provide redundancy (RAID) or encryption.

With the introduction of the Linux device mapper (DM) and LVM2, the logical volume manager for Linux, several years ago, Linux provides very powerful and much more flexible support for managing storage. DM provides an abstraction layer on top of the actual storage block devices and provides the foundation for LVM2, RAID, encryption and other features.

Linux LVM2 provides features like growing volumes, adding additional block devices, and moving volumes between storage devices. The cluster volume manager supports working with shared storage devices (e.g. SANs). Block devices are arranged as physical volumes that can be grouped into volume groups. Logical volumes are created within the volume groups. File systems are created on top of the logical volumes, like on a regular disk partition. Volume groups and logical volumes can be named individually for easy addressing/organizing of storage.

The following picture illustrates a possible LVM configuration:

In addition to logical volume management with LVM2, the Linux kernel supports “software RAID” with the MD (multiple devices) driver. MD organizes disk drives into RAID arrays (providing different RAID levels), including fault management.

This lab session will walk you through the basic uses of LVM2, MD RAID and encryption with the dm_crypt device mapper module on the command line.

To avoid messing up the operating system itself, we created two additional virtual disk drives that will be used for these lab exercises.

These two additional virtual SATA disks should appear as SCSI disk drives /dev/sdb and /dev/sdc in the booted guest system, in addition to the primary disk drive containing the operating system (/dev/sda).

To verify, check the output of the kernel boot messages:

[oracle@oraclelinux6 ~]$ dmesg | grep "sd "

sd 2:0:0:0: [sda] 20971520 512-byte logical blocks: (10.7 GB/10.0 GiB)
sd 2:0:0:0: [sda] Write Protect is off
sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 3:0:0:0: [sdb] 8388608 512-byte logical blocks: (4.29 GB/4.00 GiB)
sd 3:0:0:0: [sdb] Write Protect is off
sd 3:0:0:0: [sdb] Mode Sense: 00 3a 00 00
sd 3:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 4:0:0:0: [sdc] 8388608 512-byte logical blocks: (4.29 GB/4.00 GiB)
sd 4:0:0:0: [sdc] Write Protect is off
sd 4:0:0:0: [sdc] Mode Sense: 00 3a 00 00
sd 4:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 4:0:0:0: [sdc] Attached SCSI disk
sd 3:0:0:0: [sdb] Attached SCSI disk
sd 2:0:0:0: [sda] Attached SCSI disk
sd 2:0:0:0: Attached scsi generic sg1 type 0
sd 3:0:0:0: Attached scsi generic sg2 type 0
sd 4:0:0:0: Attached scsi generic sg3 type 0

You can also use the lsscsi command or read the content of the file /proc/scsi/scsi to list all connected SATA/SCSI devices:

[oracle@oraclelinux6 ~]$ lsscsi

[1:0:0:0]  cd/dvd  VBOX  CD-ROM         1.0  /dev/sr0
[2:0:0:0]  disk    ATA   VBOX HARDDISK  1.0  /dev/sda
[3:0:0:0]  disk    ATA   VBOX HARDDISK  1.0  /dev/sdb
[4:0:0:0]  disk    ATA   VBOX HARDDISK  1.0  /dev/sdc

[oracle@oraclelinux6 ~]$ cat /proc/scsi/scsi

Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: VBOX  Model: CD-ROM         Rev: 1.0
  Type:   CD-ROM                      ANSI  SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA   Model: VBOX HARDDISK  Rev: 1.0
  Type:   Direct-Access               ANSI  SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA   Model: VBOX HARDDISK  Rev: 1.0
  Type:   Direct-Access               ANSI  SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA   Model: VBOX HARDDISK  Rev: 1.0
  Type:   Direct-Access               ANSI  SCSI revision: 05
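This listing can also be processed programmatically. The following sketch counts the direct-access (disk) devices with awk; it embeds the sample output from above in a here-document, whereas on a live system you would read /proc/scsi/scsi directly:

```shell
# Count direct-access (disk) devices in /proc/scsi/scsi-style output.
disks=$(awk '/Type:[[:space:]]*Direct-Access/ { n++ } END { print n+0 }' <<'EOF'
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: VBOX  Model: CD-ROM         Rev: 1.0
  Type:   CD-ROM                      ANSI  SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA   Model: VBOX HARDDISK  Rev: 1.0
  Type:   Direct-Access               ANSI  SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA   Model: VBOX HARDDISK  Rev: 1.0
  Type:   Direct-Access               ANSI  SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA   Model: VBOX HARDDISK  Rev: 1.0
  Type:   Direct-Access               ANSI  SCSI revision: 05
EOF
)
echo "disk devices: $disks"   # disk devices: 3
```

The CD-ROM entry is skipped, leaving the three hard disks (sda, sdb, sdc).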

Now that we have verified that we have two additional disk drives for our experiments, let's get going with the LVM2 configuration.

Exercise: Preparing block devices for LVM2 use

While it's possible to use entire disk drives without any partitioning information with LVM2, it's usually a good idea to create one big primary partition that spans the entire disk. LVM2 partitions use a dedicated partition ID that makes it easier to determine which disk drives are included in an LVM2 setup.

We will therefore first partition the two additional disks by creating large primary partitions that span the entire disk. We'll also choose Linux LVM (hex code "8e") as the partition ID.

On Linux, you can use various tools like fdisk, cfdisk or parted for that – the following example uses fdisk to create the disk partition:

[oracle@oraclelinux6 ~]$ sudo fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xcd14f5f9.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').

Command (m for help): n

Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-522, default 1): 1
Last cylinder, +cylinders or +size{K,M,G} (1-522, default 522): 522

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/sdb: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xcd14f5f9

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         522     4192933+  8e  Linux LVM

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Note: Repeat the procedure above to partition the second disk drive (/dev/sdc) in the same way.
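The fdisk dialog above can also be replayed non-interactively by feeding the same answers on standard input. Below is a sketch, shown as a dry run because repartitioning is destructive; the two empty lines accept the default first and last cylinder, which span the whole disk just like the values typed above:

```shell
# Answers for fdisk, one per line: new (n) primary (p) partition 1,
# default first and last cylinder, partition type (t) 8e, write (w).
fdisk_input='n
p
1


t
8e
w'
# Dry run -- print what would be executed instead of doing it:
echo "would run: printf '%s\n' \"\$fdisk_input\" | sudo fdisk /dev/sdc"
printf '%s\n' "$fdisk_input" | wc -l   # 8 answer lines
```

Only remove the dry-run echo on the lab's scratch disks; never pipe canned answers into fdisk on a disk holding data you care about.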

Exercise: Creating physical volumes and volume groups

Now that we've prepared the block devices, we need to make them “known” to LVM2 as physical volumes. This is done using the pvcreate tool. It initializes one or more physical volumes for later use by the Logical Volume Manager. Each volume can be a disk partition, an entire disk, a meta device (e.g. a RAID array), or a loopback file.

Initialize the two disk drives for use by LVM2:


[oracle@oraclelinux6 ~]$ sudo pvcreate -v /dev/sdb1 /dev/sdc1

  Set up physical volume for "/dev/sdb1" with 8385867 available sectors
  Zeroing start of device /dev/sdb1
  Writing physical volume data to disk "/dev/sdb1"
  Physical volume "/dev/sdb1" successfully created
  Set up physical volume for "/dev/sdc1" with 8385867 available sectors
  Zeroing start of device /dev/sdc1
  Writing physical volume data to disk "/dev/sdc1"
  Physical volume "/dev/sdc1" successfully created

The -v option makes the output more verbose, so you can see what the command is actually doing. You can use pvdisplay to print all known physical volumes:

[oracle@oraclelinux6 ~]$ sudo pvdisplay

  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               vg_oraclelinux6
  PV Size               7.51 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              1922
  Free PE               0
  Allocated PE          1922
  PV UUID               VnESLQ-yehh-35Yg-KJ90-l8z4-FhrE-FFjoIr

  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               vg_oraclelinux6
  PV Size               2.00 GiB / not usable 4.73 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              510
  Free PE               0
  Allocated PE          510
  PV UUID               bgCPow-IlA3-3Vkw-ueJv-Lfip-cI12-mfnug8

  "/dev/sdb1" is a new physical volume of "4.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name
  PV Size               4.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               EyXQq7-jc1X-tZbX-Btmo-YHKh-06Dt-5JHrc2

  "/dev/sdc1" is a new physical volume of "4.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdc1
  VG Name
  PV Size               4.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               ebngT2-fMuj-3PEB-0gGc-VegQ-PLOI-VcIgCR

In the example above, you will notice that the base operating system is also installed on top of LVM2: the second and third partitions of the first disk drive (/dev/sda2 and /dev/sda3) belong to the volume group “vg_oraclelinux6”.

As an alternative to the above, the pvs command displays all available PVs in a more condensed form:


[oracle@oraclelinux6 ~]$ sudo pvs

  PV         VG              Fmt  Attr PSize PFree
  /dev/sda2  vg_oraclelinux6 lvm2 a--  7.51g    0
  /dev/sda3  vg_oraclelinux6 lvm2 a--  1.99g    0
  /dev/sdb1                  lvm2 a--  4.00g 4.00g
  /dev/sdc1                  lvm2 a--  4.00g 4.00g

We now have two additional physical volumes that we can assign to an existing or a completely new volume group. We will start by using just one of the two additional physical volumes for the first examples. Later, the second volume will come into play, too.

You can now use the vgcreate command to create a new volume group on the physical volume(s). Space in a volume group is divided into “extents”, chunks of space that are allocated at once. The default is 4 MB. The basic syntax is:

vgcreate -v <volume group name> <device>

It's possible to provide more than one physical device here, to create a volume group that spans across multiple physical volumes. Again, the -v option makes the command's execution a bit more verbose so we can see what's going on. Now let's create a new volume group “myvolg” on physical volume /dev/sdb1:

[oracle@oraclelinux6 ~]$ sudo vgcreate -v myvolg /dev/sdb1

  Wiping cache of LVM-capable devices
  Adding physical volume '/dev/sdb1' to volume group 'myvolg'
  Archiving volume group "myvolg" metadata (seqno 0).
  Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 1).
  Volume group "myvolg" successfully created

The command “vgdisplay” will list all known volume groups in the system. Note how our new volume group is there, too:


[oracle@oraclelinux6 ~]$ sudo vgdisplay

  --- Volume group ---
  VG Name               myvolg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               4.00 GiB
  PE Size               4.00 MiB
  Total PE              1023
  Alloc PE / Size       0 / 0
  Free  PE / Size       1023 / 4.00 GiB
  VG UUID               Tb30rU-AcHP-Cfvq-2cfH-jMa0-NOF1-DuGMvz

  --- Volume group ---
  VG Name               vg_oraclelinux6
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               9.50 GiB
  PE Size               4.00 MiB
  Total PE              2432
  Alloc PE / Size       2432 / 9.50 GiB
  Free  PE / Size       0 / 0
  VG UUID               0tE3oy-Jylq-PABw-mPQf-Cl9Z-2pqz-zU02su

An alternative short form is the vgs command, which displays the known volume groups in a more condensed fashion:

[oracle@oraclelinux6 ~]$ sudo vgs

  VG              #PV #LV #SN Attr   VSize VFree
  myvolg            1   0   0 wz--n- 4.00g 4.00g
  vg_oraclelinux6   2   2   0 wz--n- 9.50g    0

Using this command you can quickly get an overview of your LVM setup, and it's also particularly suitable for use inside shell scripts. Check the vgs(8) man page for more details.

In our example you can see that the volume group vg_oraclelinux6 consists of two physical volumes (#PV), contains two logical volumes (#LV) and has no free space left for additional logical volumes (VFree=0). Our newly created volume group myvolg consists of one physical volume, contains no logical volumes yet and has 4 gigabytes of free space available.

Storage space in LVM2 is divided into so-called “extents” – this is the smallest logical unit a volume can be made of. By default, vgcreate chooses a physical extent size of 4 megabytes, but you can change this by using the --physicalextentsize option, depending on your storage requirements.
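The extent arithmetic is easy to check against the vgdisplay output shown earlier: the 4 GiB partition yields 1023 usable extents at the default 4 MiB extent size (a little space is consumed by LVM metadata, which is why it is not a full 1024). A quick sketch:

```shell
# Relate the extent count to the volume group size, using the numbers
# vgdisplay reported for "myvolg": Total PE 1023, PE Size 4.00 MiB.
PE_MIB=4
TOTAL_PE=1023
VG_MIB=$((TOTAL_PE * PE_MIB))
echo "usable VG size: ${VG_MIB} MiB"   # usable VG size: 4092 MiB
```

4092 MiB is what vgdisplay rounds up and prints as "VG Size 4.00 GiB".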

Exercise: Creating logical volumes and file systems

Now that we've created our volume group, we can finally go ahead and create our logical volumes inside. As you have probably guessed by now, the lvcreate tool creates a logical volume inside an existing volume group. It supports a large number of options; this is the basic usage:

lvcreate --size <size> --name <logical volume name> <volume group name>


The --size option defines the size of the logical volume, by allocating the respective amount of logical extents from the free physical extent pool of that volume group.

This will create a new logical volume in the given volume group. LVM2 automatically creates the appropriate block device nodes (named dm-x, where "x" is a sequence number) in the /dev subdirectory. Additionally, LVM2 creates named entries for each volume:

/dev/mapper/<volume group name>-<logical volume name>

/dev/<volume group name>/<logical volume name>

These are symbolic links that point to the dm-x device node.
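A detail worth knowing when scripting against /dev/mapper: hyphens inside a VG or LV name are escaped by doubling them, so the single hyphen between the two names remains unambiguous. A small bash sketch of that naming rule (the helper name dm_name is our own invention):

```shell
# Build the /dev/mapper device name for a VG/LV pair: hyphens inside
# either name are doubled, then the two parts are joined with "-".
dm_name() {
    local vg="${1//-/--}" lv="${2//-/--}"
    printf '%s-%s\n' "$vg" "$lv"
}
dm_name myvolg myvol            # myvolg-myvol
dm_name myvolg myvol-snapshot   # myvolg-myvol--snapshot
```

The doubled hyphen is why the snapshot created later in this lab appears as /dev/mapper/myvolg-myvol--snapshot.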

Let's create a logical volume named myvol inside of the myvolg volume group, with a size of 2 gigabytes:

[oracle@oraclelinux6 ~]$ sudo lvcreate -v --size 2g --name myvol myvolg

  Setting logging type to disk
  Finding volume group "myvolg"
  Archiving volume group "myvolg" metadata (seqno 1).
  Creating logical volume myvol
  Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 2).
  Found volume group "myvolg"
  activation/volume_list configuration setting not defined: Checking only host tags for myvolg/myvol
  Creating myvolg-myvol
  Loading myvolg-myvol table (252:2)
  Resuming myvolg-myvol (252:2)
  Clearing start of logical volume "myvol"
  Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 2).
  Logical volume "myvol" created

Now let's take a look at the existing logical volumes:


[oracle@oraclelinux6 ~]$ sudo lvdisplay

  --- Logical volume ---
  LV Path                /dev/myvolg/myvol
  LV Name                myvol
  VG Name                myvolg
  LV UUID                igrKHo-IdMv-rECU-b3ju-xQde-FV3U-ffabjx
  LV Write Access        read/write
  LV Creation host, time oraclelinux6.localdomain, 2013-01-09 01:09:43 -0800
  LV Status              available
  # open                 0
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:2

  --- Logical volume ---
  LV Path                /dev/vg_oraclelinux6/lv_root
  LV Name                lv_root
  VG Name                vg_oraclelinux6
  LV UUID                l4kAq3-ahhE-cw8Y-D0G4-fkml-8W4X-kgEJmd
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 1
  LV Size                7.53 GiB
  Current LE             1928
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

  --- Logical volume ---
  LV Path                /dev/vg_oraclelinux6/lv_swap
  LV Name                lv_swap
  VG Name                vg_oraclelinux6
  LV UUID                1olLkX-fTZ0-X79l-eDJo-9b6L-pLmp-Pp8dpm
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 2
  LV Size                1.97 GiB
  Current LE             504
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1

[oracle@oraclelinux6 ~]$ sudo lvs

  LV      VG              Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
  myvol   myvolg          -wi-a--- 2.00g
  lv_root vg_oraclelinux6 -wi-ao-- 7.53g
  lv_swap vg_oraclelinux6 -wi-ao-- 1.97g

The myvol logical volume has been created. LVM2 and the device mapper also created the corresponding block device nodes in /dev for us:

[oracle@oraclelinux6 ~]$ ls -l /dev/mapper/myvolg-myvol /dev/myvolg/myvol /dev/dm-2

brw-rw---- 1 root disk 252, 2 Jan 18 16:19 /dev/dm-2
lrwxrwxrwx 1 root root      7 Jan 18 16:19 /dev/mapper/myvolg-myvol -> ../dm-2
lrwxrwxrwx 1 root root      7 Jan 18 16:19 /dev/myvolg/myvol -> ../dm-2

The free space in our volume group has also been reduced and the number of logical volumes has been updated:


[oracle@oraclelinux6 ~]$ sudo vgs

  VG              #PV #LV #SN Attr   VSize VFree
  myvolg            1   1   0 wz--n- 4.00g 2.00g
  vg_oraclelinux6   2   2   0 wz--n- 9.50g    0

By the way, it's possible to rename existing volume groups or logical volumes, using the vgrename and lvrename commands:

vgrename <old VG name> <new VG name>

lvrename <volume group> <old LV name> <new LV name>

Exercise: Creating and checking ext4 file systems (mkfs/fsck)

Now that we've created a logical volume, we can treat it as any other block device and create a file system on top of it. In this example, we're using the ext4 file system, but you are free to use any other file system like XFS, Btrfs or ReiserFS, of course.

[oracle@oraclelinux6 ~]$ sudo mkfs.ext4 -v /dev/mapper/myvolg-myvol

mke2fs 1.41.12 (17-May-2010)
fs_types for mke2fs.conf resolution: 'ext4', 'default'
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 524288 blocks
26214 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.

After the file system has been created, you need to mount it somewhere in your directory structure in order to be able to access it. First you need to create a new empty directory that will act as the “mount point”, then you mount the file system to this location. We'll be creating a new top-level directory named /myvol in the exercise below:

[oracle@oraclelinux6 ~]$ sudo mkdir -v /myvol
mkdir: created directory `/myvol'
[oracle@oraclelinux6 ~]$ sudo mount -v -t ext4 /dev/myvolg/myvol /myvol
/dev/mapper/myvolg-myvol on /myvol type ext4 (rw)
[oracle@oraclelinux6 ~]$ df -h /myvol

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol 2.0G   67M  1.9G   4% /myvol

[oracle@oraclelinux6 ~]$ mount | grep myvol
/dev/mapper/myvolg-myvol on /myvol type ext4 (rw)

The -h option instructs df to use “human readable” values for printing the file system size, used and available disk space. Now you can access the file system and start using it for storing data! Try creating some directories or copying some files into the file system. In the example below, we use some kernel source files to populate some of the logical volume's disk space.


[oracle@oraclelinux6 ~]$ sudo mkdir -v /myvol/src
mkdir: created directory `/myvol/src'
[oracle@oraclelinux6 ~]$ sudo cp -a /usr/src/kernels/2.6.39-300* /myvol/src/
[oracle@oraclelinux6 ~]$ df -h /myvol

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol 2.0G  145M  1.8G   8% /myvol

[oracle@oraclelinux6 ~]$ ls -l /myvol/src/

total 4
drwxr-xr-x 22 root root 4096 Jan  8 15:35 2.6.39-300.17.2.el6uek.x86_64

Exercise: Mounting file systems at bootup time

Mounting a file system manually like in the exercise above is not a persistent change. If you want to make sure that a file system is mounted at system bootup time, you need to add an entry to the /etc/fstab file, a plain-text file that defines which file systems should be mounted. The fstab file lists one file system per line; the various fields are separated by white space (tabs or blanks). The basic format of an fstab entry looks as follows:

<device> <mount point> <file system type> <mount options> <dump option> <fs check order>

See the fstab(5) manual page for a more detailed description of these fields. Open /etc/fstab in your preferred text editor as the root user (e.g. sudo gedit /etc/fstab or sudo vi /etc/fstab) and add a new line for the file system we created to the end of the list:

#
# /etc/fstab
# Created by anaconda on Thu Jan 12 13:21:03 2012
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_oraclelinux6-lv_root       /         btrfs   defaults        1 1
UUID=ed6b5002-07d3-4381-9057-47ee31704c78 /boot     ext4    defaults        1 2
/dev/mapper/vg_oraclelinux6-lv_swap       swap      swap    defaults        0 0
tmpfs                                     /dev/shm  tmpfs   defaults        0 0
devpts                                    /dev/pts  devpts  gid=5,mode=620  0 0
sysfs                                     /sys      sysfs   defaults        0 0
proc                                      /proc     proc    defaults        0 0

/dev/myvolg/myvol /myvol ext4 defaults 0 0
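Before rebooting, the new entry can be sanity-checked. A small sketch: a well-formed fstab line has exactly six whitespace-separated fields (device, mount point, file system type, options, dump flag, fsck pass number):

```shell
# The line we appended to /etc/fstab:
entry='/dev/myvolg/myvol /myvol ext4 defaults 0 0'
# Count the whitespace-separated fields; a valid fstab entry has six.
fields=$(echo "$entry" | awk '{ print NF }')
echo "fields: $fields"   # fields: 6
```

You can also run sudo mount -a, which tries to mount everything listed in /etc/fstab without a reboot; entries that are already mounted are left alone, so a typo in the new line shows up immediately as an error.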

Now the file system will be mounted automatically on the next reboot of your system. Try it out by rebooting your virtual machine at this point!

Exercise: Creating a snapshot volume

One of the nice features of LVM2 is the capability of creating an atomic and instant snapshot copy of a logical volume. This comes in handy if you want to perform a consistent backup of a file system without having to bring down any services that might need to access these files. Snapshots can be mounted at a different location and can even be modified.

Space requirements for a snapshot are very low – LVM2 only needs to keep a backup copy of blocks that have been changed since the snapshot was created. You determine the size of this “backing store” when creating the snapshot volume. This also implies that you need to have free space available in your volume group.

However, bear in mind that creating a snapshot comes with a performance penalty – these snapshots are not designed to stay around for a long time, and keeping multiple snapshots of the same volume significantly degrades the overall I/O performance. So LVM snapshots aren't as “cheap” as they are for file systems like Btrfs or Solaris' ZFS – it is strongly recommended to discard the LVM snapshot as soon as it has fulfilled its duty.

The basic command syntax looks like this:

lvcreate --size <size> --snapshot --name <snapshot name> <logical volume>


Example:

[oracle@oraclelinux6 ~]$ sudo lvcreate -v --size 500m --snapshot --name myvol-snapshot myvolg/myvol

  Setting logging type to disk
  Setting chunksize to 8 sectors.
  Finding volume group "myvolg"
  Archiving volume group "myvolg" metadata (seqno 2).
  Creating logical volume myvol-snapshot
  Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 3).
  Found volume group "myvolg"
  activation/volume_list configuration setting not defined: Checking only host tags for myvolg/myvol-snapshot
  Creating myvolg-myvol--snapshot
  Loading myvolg-myvol--snapshot table (252:3)
  Resuming myvolg-myvol--snapshot (252:3)
  Clearing start of logical volume "myvol-snapshot"
  Creating logical volume snapshot0
  Found volume group "myvolg"
  Found volume group "myvolg"
  Executing: /sbin/modprobe dm-snapshot
  Creating myvolg-myvol-real
  Loading myvolg-myvol-real table (252:4)
  Loading myvolg-myvol table (252:0)
  Creating myvolg-myvol--snapshot-cow
  Loading myvolg-myvol--snapshot-cow table (252:5)
  Resuming myvolg-myvol--snapshot-cow (252:5)
  Loading myvolg-myvol--snapshot table (252:3)
  Suspending myvolg-myvol (252:0) with filesystem sync with device flush
  Suspending myvolg-myvol-real (252:4) with filesystem sync with device flush
  Found volume group "myvolg"
  Loading myvolg-myvol--snapshot-cow table (252:5)
  Suppressed myvolg-myvol--snapshot-cow (252:5) identical table reload.
  Resuming myvolg-myvol-real (252:4)
  Resuming myvolg-myvol--snapshot (252:3)
  Resuming myvolg-myvol (252:0)
  Monitoring myvolg/snapshot0
  Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 4).
  Logical volume "myvol-snapshot" created

We can now go ahead and mount this snapshot like any other volume:

[oracle@oraclelinux6 ~]$ sudo mount -v -t ext4 /dev/myvolg/myvol-snapshot /mnt
/dev/mapper/myvolg-myvol--snapshot on /mnt type ext4 (rw)
[oracle@oraclelinux6 ~]$ ls -l /mnt/src/

total 4
drwxr-xr-x 22 root root 4096 Jan  8 15:35 2.6.39-300.17.2.el6uek.x86_64

As you can see, the snapshot contains the exact same content as the volume it has been taken from. Removing files from the original volume does not change the snapshot's content:

[oracle@oraclelinux6 ~]$ sudo rm -rf /myvol/src/2.6.39-300.17.2.el6uek.x86_64
[oracle@oraclelinux6 ~]$ ls -l /myvol/src/
total 0
[oracle@oraclelinux6 ~]$ ls -l /mnt/src/

total 4
drwxr-xr-x 22 root root 4096 Jan  8 15:35 2.6.39-300.17.2.el6uek.x86_64

A snapshot is not just an identical read-only copy of a volume, it can be modified as well:


[oracle@oraclelinux6 ~]$ sudo touch /mnt/testfile
[oracle@oraclelinux6 ~]$ ls -l /mnt/testfile

-rw-r--r-- 1 root root 0 Jan  9 01:33 /mnt/testfile
[oracle@oraclelinux6 ~]$ ls -l /myvol/

total 20
drwx------ 2 root root 16384 Jan  9 01:14 lost+found
drwxr-xr-x 2 root root  4096 Jan  9 01:32 src

Note that it's not possible to “promote” a snapshot volume into becoming a replacement for the original volume. Deleting the underlying volume automatically erases all related snapshots as well. Also, creating snapshots of snapshots is not supported yet – LVM2 is still evolving.

To remove an LVM snapshot, use the lvremove command, which can also be used to remove regular logical volumes:

lvremove <volume group>/<logical volume>

To remove all logical volumes from a volume group, just provide the volume group name without listing any particular logical volume.

Example:

[oracle@oraclelinux6 ~]$ sudo umount -v /mnt
/dev/mapper/myvolg-myvol--snapshot umounted
[oracle@oraclelinux6 ~]$ sudo lvremove -v myvolg/myvol-snapshot
Using logical volume(s) on command line
Do you really want to remove active logical volume myvol-snapshot? [y/n]: y

  Archiving volume group "myvolg" metadata (seqno 4).
  Removing snapshot myvol-snapshot
  Found volume group "myvolg"
  Found volume group "myvolg"
  Loading myvolg-myvol table (252:0)
  Loading myvolg-myvol--snapshot table (252:3)
  Not monitoring myvolg/snapshot0
  Suspending myvolg-myvol (252:0) with device flush
  Suspending myvolg-myvol--snapshot (252:3) with device flush
  Suspending myvolg-myvol-real (252:4) with device flush
  Suspending myvolg-myvol--snapshot-cow (252:5) with device flush
  Found volume group "myvolg"
  Resuming myvolg-myvol--snapshot-cow (252:5)
  Resuming myvolg-myvol-real (252:4)
  Resuming myvolg-myvol--snapshot (252:3)
  Removing myvolg-myvol--snapshot-cow (252:5)
  Found volume group "myvolg"
  Resuming myvolg-myvol (252:0)
  Removing myvolg-myvol-real (252:4)
  Found volume group "myvolg"
  Removing myvolg-myvol--snapshot (252:3)
  Releasing logical volume "myvol-snapshot"
  Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 6).
  Logical volume "myvol-snapshot" successfully removed
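Putting the snapshot exercise together, the typical backup cycle looks like the sketch below. The run() helper only echoes the commands here (a dry run); the tar archive path is our own choice, and the snapshot is removed as soon as the backup is done:

```shell
# Dry-run sketch of a snapshot-based backup cycle. run() just prints
# the command; replace its body with "$@" (run via sudo) to execute.
run() { echo "+ $*"; }
run lvcreate --size 500m --snapshot --name myvol-snapshot myvolg/myvol
run mount -t ext4 /dev/myvolg/myvol-snapshot /mnt
run tar czf /root/myvol-backup.tar.gz -C /mnt .   # archive path: our assumption
run umount /mnt
run lvremove -f myvolg/myvol-snapshot             # discard the snapshot promptly
```

Because the backup reads from the frozen snapshot rather than the live volume, services writing to /myvol can keep running throughout.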

Exercise: Increasing logical volumes and file systems

One of the nice features of LVM2 is the capability to dynamically resize logical volumes. This can be done without interfering with any of the other volumes inside the same volume group, as long as there are free extents available. By shrinking other logical volumes which might not need that much disk space, you can reclaim free space and allocate it to another volume instead.

You can use the lvextend utility to increase the logical volume and grow the file system on top of it in one step. lvextend invokes the fsadm utility in the background to grow or shrink the file system itself. Alternatively, you can perform these steps separately, in case you're using a file system other than the ones supported by fsadm (currently ext2/3/4, ReiserFS and XFS).

Some file systems may require this operation to be performed while the file system is unmounted (especially shrinking a file system), others can perform this operation “on the fly”. For example, ext4 or XFS on a Linux 2.6 kernel supports on-line resizing, so you don't need to unmount and disrupt ongoing system activity when you need to provide more disk space.

In the following example, we're increasing the size of our existing logical volume and file system by 500 MB in one step, using lvextend with the -r option (resize file system):


[oracle@oraclelinux6 ~]$ sudo lvs

  LV      VG              Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
  myvol   myvolg          -wi-ao-- 2.00g
  lv_root vg_oraclelinux6 -wi-ao-- 7.53g
  lv_swap vg_oraclelinux6 -wi-ao-- 1.97g

[oracle@oraclelinux6 ~]$ df -h /myvol/

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol 2.0G  122M  1.8G   7% /myvol

[oracle@oraclelinux6 ~]$ sudo lvextend -v -L +500M -r myvolg/myvol

Finding volume group myvolg
Executing: fsadm --verbose check /dev/myvolg/myvol
fsadm: "ext4" filesystem found on "/dev/mapper/myvolg-myvol"
fsadm: Skipping filesystem check for device "/dev/mapper/myvolg-myvol" as the filesystem is mounted on /myvol
fsadm failed: 3
Archiving volume group "myvolg" metadata (seqno 6).
Extending logical volume myvol to 2.49 GiB
Found volume group "myvolg"
Found volume group "myvolg"
Loading myvolg-myvol table (252:0)
Suspending myvolg-myvol (252:0) with device flush
Found volume group "myvolg"
Resuming myvolg-myvol (252:0)
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 7).
Logical volume myvol successfully resized
Executing: fsadm --verbose resize /dev/myvolg/myvol 2609152K
fsadm: "ext4" filesystem found on "/dev/mapper/myvolg-myvol"
fsadm: Device "/dev/mapper/myvolg-myvol" size is 2671771648 bytes
fsadm: Parsing tune2fs -l "/dev/mapper/myvolg-myvol"
fsadm: Resizing filesystem on device "/dev/mapper/myvolg-myvol" to 2671771648 bytes (524288 -> 652288 blocks of 4096 bytes)
fsadm: Executing resize2fs /dev/mapper/myvolg-myvol 652288
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/myvolg-myvol is mounted on /myvol; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/mapper/myvolg-myvol to 652288 (4k) blocks.
The filesystem on /dev/mapper/myvolg-myvol is now 652288 blocks long.

[oracle@oraclelinux6 ~]$ df -h /myvol/

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol 2.5G  122M  2.3G   6% /myvol

[oracle@oraclelinux6 ~]$ sudo lvs

LV      VG              Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
myvol   myvolg          -wi-ao-- 2.49g
lv_root vg_oraclelinux6 -wi-ao-- 7.53g
lv_swap vg_oraclelinux6 -wi-ao-- 1.97g

Exercise: Adding a physical volume to a volume group

But what if we wanted to extend a logical volume's size to more than what the current volume group and underlying physical volumes can provide? With LVM2, you can easily add additional physical volumes to an existing volume group on the fly, which then allows you to grow the logical volumes across these new physical volumes.

This is performed by initializing a new physical volume by running pvcreate first (see the previous exercise "Preparing block devices for LVM2 use" for details). Then you use the vgextend command to add the new physical volume to an existing volume group, followed by calling lvextend to resize the logical volume and the file system on top of it:

[oracle@oraclelinux6 ~]$ sudo lvs


LV      VG              Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
myvol   myvolg          -wi-ao-- 2.49g
lv_root vg_oraclelinux6 -wi-ao-- 7.53g
lv_swap vg_oraclelinux6 -wi-ao-- 1.97g

[oracle@oraclelinux6 ~]$ sudo vgs

VG              #PV #LV #SN Attr   VSize VFree
myvolg            1   1   0 wz--n- 4.00g 1.51g
vg_oraclelinux6   2   2   0 wz--n- 9.50g     0

[oracle@oraclelinux6 ~]$ sudo pvs

PV         VG              Fmt  Attr PSize PFree
/dev/sda2  vg_oraclelinux6 lvm2 a--  7.51g     0
/dev/sda3  vg_oraclelinux6 lvm2 a--  1.99g     0
/dev/sdb1  myvolg          lvm2 a--  4.00g 1.51g
/dev/sdc1                  lvm2 a--  4.00g 4.00g

[oracle@oraclelinux6 ~]$ sudo vgextend -v myvolg /dev/sdc1

Checking for volume group "myvolg"
Archiving volume group "myvolg" metadata (seqno 7).
Wiping cache of LVM-capable devices
Adding physical volume '/dev/sdc1' to volume group 'myvolg'
Volume group "myvolg" will be extended by 1 new physical volumes
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 8).
Volume group "myvolg" successfully extended

[oracle@oraclelinux6 ~]$ sudo pvs

PV         VG              Fmt  Attr PSize PFree
/dev/sda2  vg_oraclelinux6 lvm2 a--  7.51g     0
/dev/sda3  vg_oraclelinux6 lvm2 a--  1.99g     0
/dev/sdb1  myvolg          lvm2 a--  4.00g 1.51g
/dev/sdc1  myvolg          lvm2 a--  4.00g 4.00g

[oracle@oraclelinux6 ~]$ sudo vgs

VG              #PV #LV #SN Attr   VSize VFree
myvolg            2   1   0 wz--n- 7.99g 5.50g
vg_oraclelinux6   2   2   0 wz--n- 9.50g     0

[oracle@oraclelinux6 ~]$ sudo lvextend -v -L +5G -r myvolg/myvol


Finding volume group myvolg
Executing: fsadm --verbose check /dev/myvolg/myvol
fsadm: "ext4" filesystem found on "/dev/mapper/myvolg-myvol"
fsadm: Skipping filesystem check for device "/dev/mapper/myvolg-myvol" as the filesystem is mounted on /myvol
fsadm failed: 3
Archiving volume group "myvolg" metadata (seqno 8).
Extending logical volume myvol to 7.49 GiB
Found volume group "myvolg"
Found volume group "myvolg"
Loading myvolg-myvol table (252:0)
Suspending myvolg-myvol (252:0) with device flush
Found volume group "myvolg"
Resuming myvolg-myvol (252:0)
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 9).
Logical volume myvol successfully resized
Executing: fsadm --verbose resize /dev/myvolg/myvol 7852032K
fsadm: "ext4" filesystem found on "/dev/mapper/myvolg-myvol"
fsadm: Device "/dev/mapper/myvolg-myvol" size is 8040480768 bytes
fsadm: Parsing tune2fs -l "/dev/mapper/myvolg-myvol"
fsadm: Resizing filesystem on device "/dev/mapper/myvolg-myvol" to 8040480768 bytes (652288 -> 1963008 blocks of 4096 bytes)
fsadm: Executing resize2fs /dev/mapper/myvolg-myvol 1963008
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/myvolg-myvol is mounted on /myvol; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/mapper/myvolg-myvol to 1963008 (4k) blocks.
The filesystem on /dev/mapper/myvolg-myvol is now 1963008 blocks long.

[oracle@oraclelinux6 ~]$ df -h /myvol

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol 7.4G  124M  6.9G   2% /myvol

Exercise: Removing/replacing a physical volume

Now let's consider that the first disk drive (/dev/sdb1) in our volume group myvolg is showing signs of failure (e.g. when running SMART checks with the smartctl utility). Currently, the extents on this physical volume are completely allocated by the volume group – these would need to be moved to the other physical volume before we could remove the failing one, provided there is enough free space available. LVM2 supports this operation by using the pvmove command:

pvmove <source PV> <destination PV>

The destination PV can be omitted; in this case, LVM2 attempts to move all extents to any other available physical volume related to the affected volume group.

In our case, we have an existing logical volume in the volume group that spans two physical volumes (by allocating physical extents from both), so we currently would not be able to move it off the first disk. Fortunately, the file system on that logical volume currently does not require that much disk space, so it can easily fit on the remaining working physical volume after shrinking it. As a first step, we therefore must reduce the file system and the logical volume, to free up enough allocated extents:

[oracle@oraclelinux6 ~]$ sudo resize2fs -M /dev/myvolg/myvol

resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/myvolg/myvol is mounted on /myvol; on-line resizing required
On-line shrinking from 1963008 to 1476099 not supported.

Doh! While increasing an ext4 file system can be done on the fly, it needs to be unmounted and checked before we can shrink it:


[oracle@oraclelinux6 ~]$ df -h /myvol

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol 7.4G  124M  6.9G   2% /myvol

[oracle@oraclelinux6 ~]$ sudo umount -v /dev/myvolg/myvol
/dev/mapper/myvolg-myvol umounted
[oracle@oraclelinux6 ~]$ sudo resize2fs -M /dev/myvolg/myvol

resize2fs 1.41.12 (17-May-2010)
Please run 'e2fsck -f /dev/myvolg/myvol' first.

[oracle@oraclelinux6 ~]$ sudo e2fsck -f /dev/myvolg/myvol

e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/myvolg/myvol: 12094/491520 files (0.0% non-contiguous), 62351/1963008 blocks

[oracle@oraclelinux6 ~]$ sudo resize2fs -M /dev/myvolg/myvol

resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/myvolg/myvol to 42680 (4k) blocks.
The filesystem on /dev/myvolg/myvol is now 42680 blocks long.

[oracle@oraclelinux6 ~]$ sudo mount -v /myvol/
/dev/mapper/myvolg-myvol on /myvol type ext4 (rw)
[oracle@oraclelinux6 ~]$ df -h /myvol

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol 163M  120M   35M  78% /myvol

As you can see, the file system has now been reduced in size significantly. The -M option instructs the resizing tool to shrink the file system to the absolute minimum.

The same result could also be achieved by using the fsadm utility instead, which performs the checking, unmounting and resizing of a given file system automatically and supports the ext2/3/4 file systems as well as ReiserFS and XFS (two other popular journaling file systems for Linux). However, it does not support shrinking a file system to its minimum possible size; you need to provide an absolute size manually.

[oracle@oraclelinux6 ~]$ sudo fsadm -y resize /dev/myvolg/myvol 200M

resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/myvolg-myvol is mounted on /myvol; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/mapper/myvolg-myvol to 51200 (4k) blocks.
The filesystem on /dev/mapper/myvolg-myvol is now 51200 blocks long.

[oracle@oraclelinux6 ~]$ df -h /myvol

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol 196M  120M   69M  64% /myvol

Now let's reduce the size of the logical volume underneath it. We'll choose to be on the safe side and reduce it from 7.5 GB to only 200 MB, so we don't accidentally damage the file system:


[oracle@oraclelinux6 ~]$ sudo lvs

LV      VG              Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
myvol   myvolg          -wi-ao-- 7.49g
lv_root vg_oraclelinux6 -wi-ao-- 7.53g
lv_swap vg_oraclelinux6 -wi-ao-- 1.97g

[oracle@oraclelinux6 ~]$ sudo lvreduce -v -L 200M myvolg/myvol

Finding volume group myvolg
WARNING: Reducing active and open logical volume to 200.00 MiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)

Do you really want to reduce myvol? [y/n]: y

Archiving volume group "myvolg" metadata (seqno 9).
Reducing logical volume myvol to 200.00 MiB
Found volume group "myvolg"
Found volume group "myvolg"
Loading myvolg-myvol table (252:2)
Suspending myvolg-myvol (252:2) with device flush
Found volume group "myvolg"
Resuming myvolg-myvol (252:2)
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 10).
Logical volume myvol successfully resized

[oracle@oraclelinux6 ~]$ sudo lvs

LV      VG              Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
myvol   myvolg          -wi-ao-- 200.00m
lv_root vg_oraclelinux6 -wi-ao--   7.53g
lv_swap vg_oraclelinux6 -wi-ao--   1.97g

[oracle@oraclelinux6 ~]$ df -h /myvol

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol 196M  120M   69M  64% /myvol

The logical volume has now been reduced in size, so it allocates far fewer extents from the volume group.

Alternatively, lvreduce can take care of reducing the file system on top of it automatically, by invoking fsadm by itself. This combines several of the steps above into a single call:


[oracle@oraclelinux6 ~]$ sudo lvreduce -v -L 150M -r myvolg/myvol

Finding volume group myvolg
Rounding size to boundary between physical extents: 152.00 MiB
Executing: fsadm --verbose check /dev/myvolg/myvol
fsadm: "ext4" filesystem found on "/dev/mapper/myvolg-myvol"
fsadm: Skipping filesystem check for device "/dev/mapper/myvolg-myvol" as the filesystem is mounted on /myvol
fsadm failed: 3
Executing: fsadm --verbose resize /dev/myvolg/myvol 155648K
fsadm: "ext4" filesystem found on "/dev/mapper/myvolg-myvol"
fsadm: Device "/dev/mapper/myvolg-myvol" size is 209715200 bytes
fsadm: Parsing tune2fs -l "/dev/mapper/myvolg-myvol"
fsadm: resize2fs needs unmounted filesystem

Do you want to unmount "/myvol"? [Y|n] y

fsadm: Executing umount /myvol
fsadm: Executing fsck -f -p /dev/mapper/myvolg-myvol
fsck from util-linux-ng 2.17.2
/dev/mapper/myvolg-myvol: 12094/16384 files (0.0% non-contiguous), 31636/51200 blocks
fsadm: Resizing filesystem on device "/dev/mapper/myvolg-myvol" to 159383552 bytes (51200 -> 38912 blocks of 4096 bytes)
fsadm: Executing resize2fs /dev/mapper/myvolg-myvol 38912
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/mapper/myvolg-myvol to 38912 (4k) blocks.
The filesystem on /dev/mapper/myvolg-myvol is now 38912 blocks long.

fsadm: Remounting unmounted filesystem back
fsadm: Executing mount /dev/mapper/myvolg-myvol /myvol
Archiving volume group "myvolg" metadata (seqno 10).
Reducing logical volume myvol to 152.00 MiB
Found volume group "myvolg"
Found volume group "myvolg"
Loading myvolg-myvol table (252:2)
Suspending myvolg-myvol (252:2) with device flush
Found volume group "myvolg"
Resuming myvolg-myvol (252:2)
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 11).
Logical volume myvol successfully resized

[oracle@oraclelinux6 ~]$ sudo lvs

LV      VG              Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
myvol   myvolg          -wi-ao-- 152.00m
lv_root vg_oraclelinux6 -wi-ao--   7.53g
lv_swap vg_oraclelinux6 -wi-ao--   1.97g

[oracle@oraclelinux6 ~]$ df -h /myvol/

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol 148M  120M   23M  85% /myvol

Now we can proceed with moving the allocated physical extents from the failing disk:


[oracle@oraclelinux6 ~]$ sudo pvmove -v /dev/sdb1

Finding volume group "myvolg"
Archiving volume group "myvolg" metadata (seqno 10).
Creating logical volume pvmove0
Moving 38 extents of logical volume myvolg/myvol
Found volume group "myvolg"
activation/volume_list configuration setting not defined: Checking only host tags for myvolg/myvol
Updating volume group metadata
Found volume group "myvolg"
Found volume group "myvolg"
Creating myvolg-pvmove0
Loading myvolg-pvmove0 table (252:3)
Loading myvolg-myvol table (252:0)
Suspending myvolg-myvol (252:0) with device flush
Suspending myvolg-pvmove0 (252:3) with device flush
Found volume group "myvolg"
activation/volume_list configuration setting not defined: Checking only host tags for myvolg/pvmove0
Resuming myvolg-pvmove0 (252:3)
Found volume group "myvolg"
Loading myvolg-pvmove0 table (252:3)
Suppressed myvolg-pvmove0 (252:3) identical table reload.
Resuming myvolg-myvol (252:0)
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 11).
Checking progress before waiting every 15 seconds
/dev/sdb1: Moved: 0.0%
/dev/sdb1: Moved: 100.0%
Found volume group "myvolg"
Found volume group "myvolg"
Loading myvolg-myvol table (252:0)
Loading myvolg-pvmove0 table (252:3)
Suspending myvolg-myvol (252:0) with device flush
Suspending myvolg-pvmove0 (252:3) with device flush
Found volume group "myvolg"
Resuming myvolg-pvmove0 (252:3)
Found volume group "myvolg"
Resuming myvolg-myvol (252:0)
Found volume group "myvolg"
Removing myvolg-pvmove0 (252:3)
Removing temporary pvmove LV
Writing out final volume group after pvmove
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 13).

[oracle@oraclelinux6 ~]$ sudo vgreduce -v myvolg /dev/sdb1

Finding volume group "myvolg"
Using physical volume(s) on command line
Archiving volume group "myvolg" metadata (seqno 13).
Removing "/dev/sdb1" from volume group "myvolg"
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 14).
Removed "/dev/sdb1" from volume group "myvolg"

[oracle@oraclelinux6 ~]$ sudo vgs

VG              #PV #LV #SN Attr   VSize VFree
myvolg            1   1   0 wz--n- 4.00g 3.85g
vg_oraclelinux6   2   2   0 wz--n- 9.50g     0

[oracle@oraclelinux6 ~]$ sudo pvs

PV         VG              Fmt  Attr PSize PFree
/dev/sda2  vg_oraclelinux6 lvm2 a--  7.51g     0
/dev/sda3  vg_oraclelinux6 lvm2 a--  1.99g     0
/dev/sdb1                  lvm2 a--  4.00g 4.00g
/dev/sdc1  myvolg          lvm2 a--  4.00g 3.85g

[oracle@oraclelinux6 ~]$ df -h /myvol

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol 148M  120M   23M  85% /myvol


The failing physical volume has now been removed from the volume group and can be replaced. The file system in logical volume myvol is still available and could even be increased in size to make use of the remaining available space in the volume group.

For the sake of time, we'll skip the step of actually removing and re-adding the virtual disk drive from the virtual machine; let's just assume you replaced the failing disk drive with a new one.

Once the replacement disk is in place, you can partition it and use pvcreate as outlined in an earlier exercise to make it available to LVM2 again.

Now you can add the physical volume to the volume group again:

[oracle@oraclelinux6 ~]$ sudo vgextend -v myvolg /dev/sdb1

Checking for volume group "myvolg"
Archiving volume group "myvolg" metadata (seqno 14).
Wiping cache of LVM-capable devices
Adding physical volume '/dev/sdb1' to volume group 'myvolg'
Volume group "myvolg" will be extended by 1 new physical volumes
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 15).
Volume group "myvolg" successfully extended

[oracle@oraclelinux6 ~]$ sudo vgs

VG              #PV #LV #SN Attr   VSize VFree
myvolg            2   1   0 wz--n- 7.99g 7.84g
vg_oraclelinux6   2   2   0 wz--n- 9.50g     0

Exercise: Setting up a RAID1 device using mdadm

As you can see from the example above, LVM2 is capable of managing multiple physical volumes like hard disks. It also supports mirroring and striping of logical volumes, to provide some redundancy and to increase performance. However, it is not a replacement for a RAID system; this is better handled by either utilizing a hardware RAID array with a dedicated controller or by using the Linux kernel's built-in software RAID functionality. You would then create logical volumes on top of these RAID block devices.

In the following exercise, we'll create a mirrored (RAID1) device consisting of two disk drives, using the mdadm utility and the kernel's multiple devices (MD) driver. This provides some basic redundancy – one of the disk drives can fail without data loss. We'll then add a file system on top of the RAID device directly. In a production environment, you would probably use LVM2 on top of it instead, to stay more flexible.

First we need to revert the LVM2 configuration from the previous exercises. You can achieve that either by restarting the virtual machine from the previous snapshot and repartitioning the virtual disks, or by entering the following commands:


[oracle@oraclelinux6 ~]$ sudo umount -v /myvol
/dev/mapper/myvolg-myvol umounted
[oracle@oraclelinux6 ~]$ sudo lvremove myvolg/myvol
Do you really want to remove active logical volume myvol? [y/n]: y
  Logical volume "myvol" successfully removed
[oracle@oraclelinux6 ~]$ sudo vgremove myvolg
  Volume group "myvolg" successfully removed
[oracle@oraclelinux6 ~]$ sudo pvremove /dev/sdb1 /dev/sdc1
  Labels on physical volume "/dev/sdb1" successfully wiped
  Labels on physical volume "/dev/sdc1" successfully wiped
[oracle@oraclelinux6 ~]$ sudo fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 83
Changed system type of partition 1 to 83 (Linux)

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[oracle@oraclelinux6 ~]$ sudo fdisk /dev/sdc

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 83
Changed system type of partition 1 to 83 (Linux)

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Also don't forget to remove the mount point (rmdir /myvol) and to take out the corresponding mount point entry in /etc/fstab!
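The cleanup could look like the following sketch, assuming /myvol is the mount point from the earlier exercises (if you are unsure about the sed pattern, edit /etc/fstab by hand with your preferred text editor instead):

```shell
# Remove the now-unused mount point directory
sudo rmdir -v /myvol

# Delete the /myvol line from /etc/fstab, keeping a backup copy in /etc/fstab.bak
sudo sed -i.bak '/\/myvol/d' /etc/fstab
```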

Now that we have two clean disk drives for testing, let's start by creating a mirrored set out of them. This is done using the mdadm utility, which is used for building, managing and monitoring Linux MD devices. Check the mdadm(8) manual page for a detailed description of its features and options.


[oracle@oraclelinux6 ~]$ sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 4191897K

Continue creating array? y

mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

[oracle@oraclelinux6 ~]$ cat /proc/mdstat

Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      4191897 blocks super 1.2 [2/2] [UU]
      [=>...................]  resync =  9.5% (401088/4191897) finish=0.9min speed=66848K/sec

unused devices: <none>

The /proc/mdstat file is a useful resource to quickly check the status of your MD RAID devices. In the example above, MD was busy initializing the RAID1 device. After this initialization phase, the status should look as follows:

[oracle@oraclelinux6 ~]$ cat /proc/mdstat

Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      4191897 blocks super 1.2 [2/2] [UU]

unused devices: <none>

You can use the mdadm tool to get some more detailed information about the currently configured device:

[oracle@oraclelinux6 ~]$ sudo mdadm --query /dev/md0
/dev/md0: 3.100GiB raid1 2 devices, 0 spares. Use mdadm --detail for more detail.
[oracle@oraclelinux6 ~]$ sudo mdadm --detail /dev/md0

/dev/md0:
        Version : 1.2
  Creation Time : Wed Jan  9 02:49:09 2013
     Raid Level : raid1
     Array Size : 4191897 (4.00 GiB 4.29 GB)
  Used Dev Size : 4191897 (4.00 GiB 4.29 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Wed Jan  9 02:49:30 2013
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : oraclelinux6.localdomain:0  (local to host oraclelinux6.localdomain)
           UUID : 78e6f947:bbbcf414:d6916aae:37a1a21f
         Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

Now that we have a block device, we can create a file system on top of it and put some data into it. Note that we could use LVM2 on top of this RAID set, too, but we're sticking to a plain file system on top of the RAID for simplicity.


[oracle@oraclelinux6 ~]$ sudo mkfs.ext4 /dev/md0

mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
262144 inodes, 1047974 blocks
52398 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 32 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

[oracle@oraclelinux6 ~]$ sudo mkdir -v /raid
mkdir: created directory `/raid'
[oracle@oraclelinux6 ~]$ sudo mount -v -t ext4 /dev/md0 /raid
/dev/md0 on /raid type ext4 (rw)
[oracle@oraclelinux6 ~]$ df -h /raid

Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        4.0G   72M  3.7G   2% /raid

As you can see, the file system has a capacity of 4 GB, which matches the size of a single disk drive. The data is mirrored to the second one transparently, in the background.

It's useful to store the RAID configuration information in a configuration file named /etc/mdadm.conf; this will help mdadm to assemble existing arrays at system bootup. You can either copy and adapt the sample configuration file from /usr/share/doc/mdadm-3.2.1/mdadm.conf-example, or create a very minimalistic one from scratch, using your text editor of choice. Our example looks as follows:

[oracle@oraclelinux6 ~]$ cat /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdc1
ARRAY /dev/md0 devices=/dev/sdb1,/dev/sdc1
[oracle@oraclelinux6 ~]$ sudo mkinitrd --force /boot/initramfs-`uname -r`.img `uname -r`

See the mdadm.conf(5) manual page for more details about the format of this file. Now that this configuration file exists, it needs to be added to the initial ramdisk so the RAID array will be properly detected and initialized upon system reboot (see https://bugzilla.redhat.com/show_bug.cgi?id=606481 for more details on why this is necessary).

Now let's copy some files onto the device, so we have some data for testing:

[oracle@oraclelinux6 ~]$ sudo cp -a /usr/src/kernels/2.6.39-300* /raid
[oracle@oraclelinux6 ~]$ ls -l /raid

total 20
drwxr-xr-x 22 root root  4096 Jan  8 15:35 2.6.39-300.17.2.el6uek.x86_64
drwx------  2 root root 16384 Jan  9 02:53 lost+found

[oracle@oraclelinux6 ~]$ df -h /raid

Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        4.0G  150M  3.6G   4% /raid

So far, our file system behaves like any other file system. Let's provoke a complete failure of one of the disk drives, so we can observe how the MD driver handles this situation.
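As an alternative to detaching the virtual disk (done below), mdadm can also simulate a failure entirely in software by marking a member device as faulty. This is only a sketch, assuming the /dev/md0 array and members from this exercise; it takes the device out of the mirror without any VM reconfiguration:

```shell
# Mark /dev/sdc1 as failed within the running array
sudo mdadm --manage /dev/md0 --fail /dev/sdc1

# Verify the degraded state of the mirror
cat /proc/mdstat
sudo mdadm --detail /dev/md0
```

We'll use the hardware-level approach in this lab instead, since it also exercises the boot-time array assembly.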


In VirtualBox, you can only make changes to the storage configuration when the VM has been powered off. So we need to shut down the VM first, either by running the following command on the command line or by selecting System -> Shut Down... -> Shut Down from the virtual machine's desktop menu:

[oracle@oraclelinux6 ~]$ sudo poweroff

Now we can detach one of the virtual disk drives from the system and reboot. Click on the VM's settings icon and select the “Storage” section.

Now right-click on the Disk2.vdi icon and select “Remove attachment”.

This will detach the disk drive from this virtual machine, to simulate a total failure of the entire disk drive.

Now let's restart the VM and figure out how MD copes with the missing disk drive. After the system has booted up, log in as the “oracle” user again and open a Terminal.

Let's take a look at the status of our RAID device:

[oracle@oraclelinux6 ~]$ cat /proc/mdstat

Personalities : [raid1]
md0 : active (auto-read-only) raid1 sdb1[0]
      4191897 blocks super 1.2 [2/1] [U_]

unused devices: <none>

The [U_] part indicates that only one of two devices is active, but you need a trained eye to discover this. It's better to look at the output from mdadm, which is a bit clearer about the degraded state of the device:


[oracle@oraclelinux6 ~]$ sudo mdadm --detail /dev/md0

/dev/md0:
        Version : 1.2
  Creation Time : Wed Jan  9 02:49:09 2013
     Raid Level : raid1
     Array Size : 4191897 (4.00 GiB 4.29 GB)
  Used Dev Size : 4191897 (4.00 GiB 4.29 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Wed Jan  9 02:59:43 2013
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : oraclelinux6.localdomain:0  (local to host oraclelinux6.localdomain)
           UUID : 78e6f947:bbbcf414:d6916aae:37a1a21f
         Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       0        0        1      removed

Even though the RAID is degraded, our file system is still available:

[oracle@oraclelinux6 ~]$ sudo mount -v -t ext4 /dev/md0 /raid
/dev/md0 on /raid type ext4 (rw)
[oracle@oraclelinux6 ~]$ ls -l /raid

total 20
drwxr-xr-x 22 root root  4096 Jan  8 15:35 2.6.39-300.17.3.el6uek.x86_64
drwx------  2 root root 16384 Jan  9 02:53 lost+found

However, it's a good idea to replace the failed disk drive as soon as possible. In our case, we can simply shut down the VM, re-attach the disk image and reboot. On a live production system, you are likely able to hot-swap the disk drive on the fly without any downtime. mdadm supports these kinds of operations as well (disabling and replacing devices on the fly, rebuilding RAID arrays), but this is out of the scope of this lab session. This exercise only scratched the surface of what MD is capable of.
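For reference, replacing a failed member on a running system typically means removing the faulty device from the array and adding its replacement, after which the mirror rebuilds automatically. This is a sketch, assuming /dev/sdc1 failed and its hot-swapped replacement appears under the same device name:

```shell
# Remove the failed device from the array
sudo mdadm --manage /dev/md0 --remove /dev/sdc1

# Add the replacement disk; the mirror resynchronizes in the background
sudo mdadm --manage /dev/md0 --add /dev/sdc1

# Watch the rebuild progress
cat /proc/mdstat
```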

Exercise: Setting up Encryption using dm_crypt and LUKS

The Linux device mapper also supports the creation of encrypted block devices using the dm_crypt device driver, which provides strong protection against data theft in case of physical loss of the hardware. Data on these devices can only be accessed if the appropriate password is provided at system bootup time. Because the encryption takes place on the underlying block device, it is file system and application agnostic – any file system (or an LVM2 setup) can be used on top of an encrypted device, even swap space.

LUKS (Linux Unified Key Setup) is a standard for hard disk encryption. It standardizes a partition header, as well as the format of the bulk data. LUKS can manage multiple passphrases, which can be revoked effectively and are protected against dictionary attacks.
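Managing those multiple passphrases is done with cryptsetup as well. The following sketch (assuming the /dev/sdb1 device prepared later in this exercise) shows how an additional passphrase could be added to a free key slot and how one could be revoked again:

```shell
# Add a second passphrase to a free key slot
# (prompts for an existing passphrase first)
sudo cryptsetup luksAddKey /dev/sdb1

# Inspect the LUKS header, including which key slots are in use
sudo cryptsetup luksDump /dev/sdb1

# Revoke a passphrase (prompts for the passphrase to remove)
sudo cryptsetup luksRemoveKey /dev/sdb1
```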

We'll re-use the existing /dev/sdb1 disk drive from our previous exercises for this, so we first have to remove the current RAID configuration manually (or reboot from a previous snapshot instead).

The following commands will unmount the file system, stop the RAID device and ensure that it's no longer recognized as a RAID volume:

[oracle@oraclelinux6 ~]$ sudo rm -v /etc/mdadm.conf
removed `/etc/mdadm.conf'
[oracle@oraclelinux6 ~]$ sudo umount -v /raid
/dev/md0 umounted
[oracle@oraclelinux6 ~]$ sudo mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[oracle@oraclelinux6 ~]$ sudo mdadm --zero-superblock /dev/sdb1

Now we have an empty device that we can use to store the encrypted volume. This is done using the cryptsetup utility.


The first command initializes the volume and sets an initial key. The -y option asks for the passphrase twice, making sure your password is typed in correctly. The second command opens the partition and creates the device mapping (in this case /dev/mapper/cryptfs). This is the actual device that will be used to create a file system on top of it – don't use the “real” physical device (/dev/sdb1) for this!

[oracle@oraclelinux6 ~]$ sudo cryptsetup -y luksFormat /dev/sdb1

WARNING!
========
This will overwrite data on /dev/sdb1 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter LUKS passphrase: <passphrase>
Verify passphrase: <passphrase>
[oracle@oraclelinux6 ~]$ sudo cryptsetup luksOpen /dev/sdb1 cryptfs
Enter passphrase for /dev/sdb1: <passphrase>

Now let's check the status of our encrypted volume:

[oracle@oraclelinux6 ~]$ sudo cryptsetup status cryptfs
/dev/mapper/cryptfs is active.
  type:    LUKS1
  cipher:  aes-cbc-essiv:sha256
  keysize: 256 bits
  device:  /dev/sdb1
  offset:  4096 sectors
  size:    8381771 sectors
  mode:    read/write

As an additional, optional safety measure, we could now write zeros to the new encrypted device. This will force the allocation of data blocks. Because the zeros are encrypted, this will look like random data to the outside world, making it nearly impossible to track down encrypted data blocks if someone gains access to the hard disk that contains the encrypted file system. We'll skip this step, as it takes quite some time.

dd if=/dev/zero of=/dev/mapper/cryptfs

Now that we have initialized our encrypted volume, we need to create a filesystem and mount point:


[oracle@oraclelinux6 ~]$ sudo mkfs.ext4 /dev/mapper/cryptfs
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
262144 inodes, 1047721 blocks
52386 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[oracle@oraclelinux6 ~]$ sudo mkdir -v /cryptfs
mkdir: created directory `/cryptfs'
[oracle@oraclelinux6 ~]$ sudo mount -v /dev/mapper/cryptfs /cryptfs
mount: you didn't specify a filesystem type for /dev/mapper/cryptfs
       I will try type ext4
/dev/mapper/cryptfs on /cryptfs type ext4 (rw)
[oracle@oraclelinux6 ~]$ df -h /cryptfs/
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cryptfs  4.0G   72M  3.7G   2% /cryptfs
[oracle@oraclelinux6 ~]$ ls -l /cryptfs/
total 16
drwx------ 2 root root 16384 Jan  9 13:16 lost+found

You can now use this file system like any other. The encryption of the data blocks is done in a fully transparent fashion, unnoticed by the filesystem or application accessing this data.

As a final step, we need to ensure that the encrypted file system is properly set up and mounted at system bootup time. For this to happen, we need to create an appropriate entry in the configuration file /etc/crypttab, using our favorite text editor:

[oracle@oraclelinux6 ~]$ cat /etc/crypttab
# <target name> <source device> <key file> <options>
cryptfs /dev/sdb1 none luks
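Device names like /dev/sdb1 can change between boots if disks are added or re-ordered. A more robust variant (a sketch; the crypttab line is illustrative) references the UUID stored in the LUKS header instead of the device name:

```shell
# Print the UUID recorded in the LUKS header of the partition
sudo cryptsetup luksUUID /dev/sdb1

# /etc/crypttab entry using that UUID instead of the device name:
#   cryptfs UUID=<uuid printed above> none luks
```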

Additionally, we need to add the file system to /etc/fstab for the actual mounting to take place, by adding a line like the following one:

[oracle@oraclelinux6 ~]$ tail -1 /etc/fstab
/dev/mapper/cryptfs /cryptfs ext4 defaults 0 0

If you reboot your system now, you will be prompted to enter your passphrase to continue the boot process:

Password for /dev/sdb1 (luks-a7e...):**********

After entering the correct passphrase, the system continues to boot and the file system will be mounted at the given location:

[oracle@oraclelinux6 ~]$ df -h /cryptfs/
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cryptfs  4.0G   72M  3.7G   2% /cryptfs


Now any files that you store in /cryptfs will be protected by the strong encryption of dm_crypt. This also means that your passphrase is an invaluable asset – if you lose it, you won't be able to access your data anymore! However, using LUKS it's actually possible to create multiple keys to unlock the volume – this can be handy to provide a “recovery key” or to allow multiple individuals to access the volume without sharing the same password. To add a key, use the following command:

[oracle@oraclelinux6 ~]$ sudo cryptsetup luksAddKey /dev/sdb1
Enter any passphrase: <existing passphrase>
Enter new passphrase for key slot: <new passphrase>
Verify passphrase: <new passphrase>
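A LUKS1 header holds up to eight key slots. To see which slots are in use, or to revoke a key again later, cryptsetup provides the luksDump and luksKillSlot commands (shown here as a sketch against the same partition; slot number 1 is just an example):

```shell
# List the key slots in the LUKS header (each shows ENABLED or DISABLED)
sudo cryptsetup luksDump /dev/sdb1

# Revoke the key in slot 1; you'll be asked for one of the remaining passphrases
sudo cryptsetup luksKillSlot /dev/sdb1 1
```

Be careful never to kill the last remaining slot, or the volume becomes permanently inaccessible.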

Now you can unlock the volume by either providing the original or the new passphrase.