Fusion Drive


Upload: rvr86

Posted on 11-Jan-2016





DESCRIPTION

Fusion drive mount issue


Page 1: Fusion Drive

Articles

KB364 - Loading the Driver via udev or Init Script for md and LVM

There are two main methods for loading the Fusion-io driver: udev and the iodrive script. The default loading method is udev.

Using udev to Load the Driver

Using Init Scripts to Load the Driver
  o Using Init Scripts to Load the VSL Driver (2.1.x to 3.x)
  o Using Init Scripts to Load the 1.2.x Driver

Using udev to Load the Driver

Most modern Linux distributions use udev to facilitate driver loading. It usually just works, behind the scenes: udev walks the devices on the PCI bus and loads any driver that has been configured to work with each device, so drivers can be loaded without scripts or configuration-file changes. The Fusion-io drivers are configured this way, and udev will find them and attempt to load them. However, there are cases where loading via udev is not appropriate, or can cause problems.

udev will wait 180 seconds for the driver to load, then it will exit. In most cases this is plenty of time, even with multiple ioMemory VSL devices installed. But if the drives were shut down improperly, loading the driver and attaching the drives can take longer than 180 seconds, and udev will exit. The driver does not exit; it continues working to attach the drives.

udev exiting early is not always a problem: the driver eventually finishes loading, and the attached block devices become usable. But if the driver takes too long to load, udev exits, and the file systems are set to be mounted in /etc/fstab, then the boot-time file system check (fsck) will fail and the system will stop booting. In most distributions the system drops into single-user (repair) mode. This is normal behavior; once the driver finishes rescanning and attaching the drives, a reboot fixes things.

For most users this will not happen often enough to be an issue, but for installations with many devices, or for server installations where dropping into single-user mode is unacceptable, there is an alternative method for driver loading that does not have these issues.
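If udev has timed out but the driver is still attaching, you can check whether the module is present. The following is a minimal sketch, not part of the original article; the module name iomemory_vsl is an assumption (check lsmod on your system for the exact name):

```shell
# driver_state reads lsmod-style output on stdin and reports whether an
# iomemory/fio module is listed. On a real system: lsmod | driver_state
driver_state() {
  if grep -Eq '^(iomemory|fio)'; then
    echo "loaded"
  else
    echo "not loaded"
  fi
}

# Demonstration with sample lsmod output (module name is hypothetical):
printf 'ext4 500000 1\niomemory_vsl 900000 0\n' | driver_state   # -> loaded
```

If the module is loaded but the block devices have not appeared yet, the driver is most likely still scanning and attaching the drives.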

Using Init Scripts to Load the Driver

The ioMemory VSL packages provide init scripts that are installed on all supported Linux distributions. These scripts typically reside in /etc/init.d/iomemory-vsl or /etc/rc.d/iomemory-vsl. They are used to load and start the driver and to mount file systems after the system is up. This method completely avoids the udev behavior described above: the script waits as long as it takes for the drives to be attached.

These steps assume that the logical volumes /dev/md0 (md) and /dev/vg0/fio_lv (LVM) have been set up beforehand. There are other knowledge base articles that provide details on logical volume creation, including updating lvm.conf to work with ioMemory VSL drivers.
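For orientation, that volume setup can be sketched roughly as follows. This is a dry-run illustration only, not the procedure from those articles: the device names /dev/fioa and /dev/fiob are hypothetical, and the mdadm/LVM options depend entirely on your layout.

```shell
# plan_volumes only prints the commands it would run; on a real system you
# would execute each line by hand after verifying the device names.
plan_volumes() {
  # Mirror two hypothetical ioMemory block devices into /dev/md0:
  echo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/fioa /dev/fiob
  # Build volume group vg0 and logical volume fio_lv on top of the array:
  echo pvcreate /dev/md0
  echo vgcreate vg0 /dev/md0
  echo lvcreate --name fio_lv --extents 100%FREE vg0
}
plan_volumes
```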

Using Init Scripts to Load the VSL Driver (2.1.x to 3.x)

To load the VSL driver (2.1.x to 3.x), follow the steps below.

1. Edit /etc/modprobe.d/iomemory-vsl.conf, and uncomment the blacklist line:

   Before:

   # To keep ioDrive from auto loading at boot, uncomment below
   # blacklist iomemory-vsl

   After:

   # To keep ioMemory VSL from auto loading at boot, uncomment below
   blacklist iomemory-vsl

   This keeps udev from automatically loading the driver.

2. Edit /etc/fstab and add noauto to the options on the appropriate lines. This keeps the OS from trying to check the drive for errors on boot.

   Before:

   ...
   /dev/md0 /iomemory-vsl_mountpoint ext3 defaults 1 2
   /dev/vg0/fio_lv /iomemory-vsl_mountpoint2 ext3 defaults 1 2

   After:

   ...
   /dev/md0 /iomemory-vsl_mountpoint ext3 defaults,noauto 0 0
   /dev/vg0/fio_lv /iomemory-vsl_mountpoint2 ext3 defaults,noauto 0 0
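The fstab change above can also be applied with sed. This sketch works on a throwaway sample file rather than the live /etc/fstab, and the pattern assumes the entries are formatted exactly like the ones shown:

```shell
# Write a sample fstab fragment matching the "Before" lines above.
cat > /tmp/fstab.sample <<'EOF'
/dev/md0 /iomemory-vsl_mountpoint ext3 defaults 1 2
/dev/vg0/fio_lv /iomemory-vsl_mountpoint2 ext3 defaults 1 2
EOF

# Append ",noauto" and zero the dump/pass fields; prints the rewritten lines.
sed -e 's|\(ext3 defaults\) 1 2$|\1,noauto 0 0|' /tmp/fstab.sample
```

To edit /etc/fstab itself you would add -i (after backing the file up), but reviewing the printed output first is safer.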

3. Edit /etc/sysconfig/iomemory-vsl, and uncomment ENABLED=1 to enable the init script.

   Before:

   # If ENABLED is not set (non-zero) then iomemory-vsl init script will not be
   # used.
   # ENABLED=1

   After:

   # If ENABLED is not set (non-zero) then iomemory-vsl init script will not be
   # used.
   ENABLED=1
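As with the fstab edit, this one-line change can be scripted with sed. The sketch below operates on a sample copy, not on the live /etc/sysconfig/iomemory-vsl:

```shell
# Write a sample sysconfig fragment matching the "Before" lines above.
cat > /tmp/iomemory-vsl.sample <<'EOF'
# If ENABLED is not set (non-zero) then iomemory-vsl init script will not be
# used.
# ENABLED=1
EOF

# Strip the leading "# " from the ENABLED line; prints the rewritten file.
sed -e 's|^# \(ENABLED=1\)$|\1|' /tmp/iomemory-vsl.sample
```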

4. While editing /etc/sysconfig/iomemory-vsl, add the md array, LVM volume group, and mount points to the MD_ARRAYS, LVM_VGS, and MOUNTS variables so they will be automatically attached and mounted:

   Before:

   ...
   # Example: MD_ARRAYS="/dev/md0 /dev/md1"
   MD_ARRAYS=""

   ...
   # Example: LVM_VGS="/dev/vg0 /dev/vg1"
   LVM_VGS=""

   ...
   # Example: MOUNTS="/mnt/fioa /mnt/firehose"
   MOUNTS=""

   After:

   ...
   # Example: MD_ARRAYS="/dev/md0 /dev/md1"
   MD_ARRAYS="/dev/md0"

   ...
   # Example: LVM_VGS="/dev/vg0 /dev/vg1"
   LVM_VGS="/dev/vg0"

   ...
   # Example: MOUNTS="/mnt/fioa /mnt/firehose"
   MOUNTS="/iomemory-vsl_mountpoint /iomemory-vsl_mountpoint2"

5. Verify the status of the init script. Make sure the ioMemory VSL script loads at run levels 1 through 5 (runlevel 0 is shutdown and runlevel 6 is reboot). Run the following commands:

   $ chkconfig iomemory-vsl on
   $ chkconfig --list iomemory-vsl
   iomemory-vsl 0:off 1:on 2:on 3:on 4:on 5:on 6:off
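If you want to verify the runlevel output in a script rather than by eye, a small check like the following works on a chkconfig --list line. This is an illustrative sketch, not part of the original procedure:

```shell
# levels_ok takes one chkconfig --list line and succeeds only if the
# service is "on" at every run level from 1 through 5.
levels_ok() {
  for lvl in 1 2 3 4 5; do
    case "$1" in
      *"$lvl:on"*) ;;      # this run level is enabled; keep checking
      *) return 1 ;;       # found a level that is not on
    esac
  done
  return 0
}

line='iomemory-vsl 0:off 1:on 2:on 3:on 4:on 5:on 6:off'
levels_ok "$line" && echo "run levels 1-5 enabled"
```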

Using Init Scripts to Load the 1.2.x Driver


1. Edit /etc/modprobe.d/iodrive, and uncomment the blacklist line:

   Before:

   # To keep ioMemory VSL from auto loading at boot, uncomment below
   # blacklist fio-driver

   After:

   # To keep ioMemory VSL from auto loading at boot, uncomment below
   blacklist fio-driver

   This keeps udev from automatically loading the driver.

2. Edit /etc/fstab and add noauto to the options on the appropriate lines. This keeps the OS from trying to check the drive for errors on boot.

   Before:

   ...
   /dev/md0 /iomemory-vsl_mountpoint ext3 defaults 1 2
   /dev/vg0/fio_lv /iomemory-vsl_mountpoint2 ext3 defaults 1 2

   After:

   ...
   /dev/md0 /iomemory-vsl_mountpoint ext3 defaults,noauto 0 0
   /dev/vg0/fio_lv /iomemory-vsl_mountpoint2 ext3 defaults,noauto 0 0

3. Edit /etc/sysconfig/iomemory-vsl, and add the md array, LVM volume group, and mount points to the MD_ARRAYS, LVM_VGS, and MOUNTS variables so they will be automatically attached and mounted:

   Before:

   ...
   # Example: MD_ARRAYS="/dev/md0 /dev/md1"
   MD_ARRAYS=""

   ...
   # Example: LVM_VGS="/dev/vg0 /dev/vg1"
   LVM_VGS=""

   ...
   # Example: MOUNTS="/mnt/fioa /mnt/firehose"
   MOUNTS=""

   After:

   ...
   # Example: MD_ARRAYS="/dev/md0 /dev/md1"
   MD_ARRAYS="/dev/md0"

   ...
   # Example: LVM_VGS="/dev/vg0 /dev/vg1"
   LVM_VGS="/dev/vg0"

   ...
   # Example: MOUNTS="/mnt/fioa /mnt/firehose"
   MOUNTS="/iomemory-vsl_mountpoint /iomemory-vsl_mountpoint2"

4. Verify the status of the init script. Make sure the iomemory-vsl script loads at run levels 1 through 5 (runlevel 0 is shutdown and runlevel 6 is reboot). Run the following commands:

   $ chkconfig iomemory-vsl on
   $ chkconfig --list iomemory-vsl
   iomemory-vsl 0:off 1:on 2:on 3:on 4:on 5:on 6:off