SCSI(Ultra320)/FC(2Gb) to
Serial ATA II
Disk Array Systems
Version 1.5
Janus Generic Software Manual 42-30000-5086
Maxtronic
Taipei, Taiwan
+886-2-22184875
http://www.maxtronic.com.tw/
© Copyright 2005 Maxtronic Inc. All rights reserved.
i
Table of Contents
Part1
Chapter 1 Introduction 1-1
1.1 About this manual 1-1
1.2 RAID terminology 1-2
Chapter 2 Quick Setup 2-1
2.1 System diagnostic during boot cycle 2-1
2.2 LCD screen symbols (drive status) 2-3
2.3 Menu navigation and mute system beeper from front panel 2-4
2.4 Quick Setup – Flow chart and procedures 2-5
Chapter 3 Advanced Setup 3-1
3.1 Advanced Setup - Flow chart and procedures 3-2
3.2 Deleting or reconfiguring an array 3-17
3.3 Deleting a slice of an array 3-18
3.4 Expanding an Array 3-19
3.5 Regenerating an array's parity and performing a parity check 3-19
3.6 Configuring SAN masking for FC disk array 3-20
Chapter 4 Using advanced functions of RAID system 4-1
4.1 Disk Scrubbing – Bad block detection and parity correction 4-2
4.2 Auto Shutdown – RAID system in critical condition 4-7
4.3 Disk Self Test – Checking drive health in RAID system 4-8
4.4 Disk Clone – Manually cloning a failing drive 4-10
4.5 SMART – Predicting a drive failure and automatic disk cloning 4-12
4.6 AV Streaming - Performance-critical streaming application 4-15
4.7 Disk Standby Timer – Hard Disk Spin Down 4-17
4.8 PreRead cache - Enhancing sequential read throughput 4-18
ii
4.9 Alignment Offset – Multipath IO S/W (PathGuard) on Windows OS 4-20
Chapter 5 Event Messages of the RAID System 5-1
5.1 Event Severity 5-1
5.2 Event List 5-2
Appendix I
Upgrading Firmware of the RAID System I
Pre-configured RAID parameters V
On-line and Off-line effective RAID parameters VI
Recording of RAID Configuration VII
Customer feedback and contacting Maxtronic technical support X
Part2
Chapter 1 Setting up RAID GUI 1-1
Chapter 2 Monitoring with RAID GUI 2-1
Appendix I
iii
Revision History
Version Date Remarks
1.0   Aug 2005   Initial release
1.1   Dec 2005   Modifications
1.2   Nov 2006   Add Part 2 - RAID GUI
1.3   Apr 2007   Add contacts in 4.7 of Part 1; add contacts on p.2-22 of Part 2; modify Table of Contents
1.4   Jul 2007   Remove typo in Note referring to "Appendix A"
1.5   Oct 2007   Modify "4.7 Disk Standby Timer – Hard Disk Spin Down" contents
Part1
1-1
Chapter 1
Introduction
This chapter gives an overview of the Maxtronic disk array generic software. It covers the following topics:
• Section 1.1, "About this manual"
• Section 1.2, "RAID terminology"
1.1 About this manual
This manual has the following chapters and appendix:
Chapter 1, Introduction, describes RAID terminology and basic SCSI concepts.
Chapter 2, Quick Setup, introduces the procedure to create a single array using the front panel.
Chapter 3, Advanced Setup, introduces the advanced configuration flow to create and delete multiple arrays and slices through the RS-232 and RJ-45 ports.
Chapter 4, Using advanced functions of RAID system, describes proprietary RAID functions, their configuration flow, and the conditions for using them.
Chapter 5, Event messages of the RAID System, lists the messages recorded by the RAID system.
Appendix, Upgrading firmware, and the pre-configured, online- and offline-effective RAID parameters.
1-2
1.2 RAID terminology
Redundant Array of Independent Disks (RAID) is a storage technology that combines multiple inexpensive drives into a logical drive to obtain better performance, capacity, and reliability than single-disk storage.
RAID 0 - Disk Striping
In RAID 0, data is divided into pieces and written to all disks in parallel. This
process is called striping because the pieces of data form a stripe across multiple disks.
This improves access rate, but makes availability lower, since a single disk failure will
cause the failure of the array. A RAID 0 array is unsuitable for data that cannot easily be reproduced, or for mission-critical system operation.
Figure 1- RAID 0(Disk striping)
1-3
RAID 1- Disk Mirroring
In RAID 1, data is duplicated on two or more disks to provide high access rate and
very high data availability. This process is called mirroring. If a disk fails, the RAID
controller directs all requests to the surviving members.
Figure 2- RAID 1(Disk mirroring)
RAID 3- Disk Striping with dedicated parity
In RAID 3, data is divided into pieces and a single parity is calculated. The pieces
and parity are written to separate disks in parallel. The parity is written to a single
dedicated disk. This process is called striping with dedicated parity. The parity disk
stores redundant information about the data on other disks. If a single disk fails, then the
data can be regenerated from other data and parity disks.
Figure 3- RAID 3(Disk striping with dedicated parity)
1-4
RAID 5- Disk Striping with distributed parity
In RAID 5, data is divided into pieces and a single parity is calculated. The pieces
and parity are written to separate disks in parallel. The parity is written to a different disk
in each stripe. Parity provides redundant information about the data on other disks. If a
single disk fails, then the data can be regenerated from other data and parity disks.
Figure 4- RAID 5(Disk striping with distributed parity)
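For readers who want to see the striping-with-parity idea concretely, here is a minimal sketch in Python (illustrative only, not the controller's firmware): the parity block is the XOR of the data blocks in a stripe, so any one missing block can be recovered by XOR-ing the surviving blocks with the parity.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together (the RAID 3/5 parity operation)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# A stripe of three data blocks (toy 4-byte blocks for illustration)
d1, d2, d3 = b"\x11\x22\x33\x44", b"\xaa\xbb\xcc\xdd", b"\x01\x02\x03\x04"
parity = xor_blocks([d1, d2, d3])            # written to the parity disk

# If the disk holding d2 fails, its data is regenerated from the survivors
recovered_d2 = xor_blocks([d1, d3, parity])
assert recovered_d2 == d2
```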
1-5
RAID 6- Disk Striping with two sets of distributed parities
In RAID 6, data is divided into pieces and two sets of parities are calculated. The
pieces and parities are written to separate disks in parallel. The two parities are written
to different disks in each stripe. Parity provides redundant information about the data on
the RAID member disks. If two disks fail at the same time, the data can still be
regenerated from other data and parity disks.
The RAID 6 algorithm uses two independent equations to compute two sets of
parity, which enable reconstruction of data when two disks and/or blocks fail at the same
time. It greatly improves the data availability.
Figure 5- RAID 6 (Disk striping with two sets of parity- P and Q)
1-6
RAID TP(Triple Parity) - Disk Striping with triple distributed parity
In RAID TP, data is divided into pieces and three sets of parities are calculated.
The data pieces and the parities are written to separate disks in parallel. The three
parities are written to different disks in each stripe. Parity provides redundant
information about the data on the RAID member disks. If three disks fail at the same
time, the data can still be regenerated from other data and parity disks.
The RAID TP algorithms use three independent equations to compute triple parity,
which enable reconstruction of data when three disks and/or blocks fail at the same time.
It greatly improves the data availability.
Figure 6- RAID TP (Disk striping with triple parity – P, Q and R)
JBOD – Just a Bunch of Disks
JBOD stands for just a bunch of disks. In JBOD mode, the host will see each drive
as an independent logical disk. There is no fault-tolerance in JBOD.
Figure 7- JBOD (Just a Bunch Of Disks)
1-7
NRAID – Non-RAID
In NRAID mode, all drives are configured as a single logical drive without fault-tolerance. The total capacity of an NRAID set is the sum of the capacities of all member drives.
Figure 8- NRAID (Non-RAID)
RAID 0+1 – Disk striping with mirroring
RAID 0+1 is a combination of RAID 0 and RAID 1 to form an array.
RAID 30 – Striping of RAID 3
RAID 30 is a combination of RAID 0 and RAID 3 to form an array. It provides better
data redundancy compared with RAID 3.
RAID 50 – Striping of RAID 5
RAID 50 is a combination of RAID 0 and RAID 5 to form an array. It provides better
data redundancy compared with RAID 5.
1-8
Summary of RAID Levels
The following table provides a brief overview of RAID levels. A higher data availability number indicates higher fault tolerance.

RAID Level   Description                          Capacity   Data Availability   Minimum drives
RAID 0       Disk striping                        N          0                   1
RAID 1       Disk mirroring                       N/N        5                   2
RAID 3       Striping with dedicated parity       N-1        1                   3
RAID 5       Striping with distributed parity     N-1        1                   3
RAID 6       Striping with 2 sets of parity       N-2        3                   4
RAID TP      Striping with triple parity          N-3        4                   4
RAID 0+1     Disk striping with mirroring         N/2        2                   4
RAID 30      Striping of RAID 3                   N-2        2                   6
RAID 50      Striping of RAID 5                   N-2        2                   6
NRAID        Non-RAID                             N          0                   1
JBOD         Just a bunch of disks                N          0                   1
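The usable-capacity column can be turned into a quick calculation. The sketch below is illustrative only; the function name is made up, and RAID 1's "N/N" is read here as one drive's worth of capacity.

```python
# Usable-capacity factors from the summary table (n = number of member drives).
CAPACITY = {
    "RAID 0":   lambda n: n,
    "RAID 1":   lambda n: 1,        # N/N, i.e. the capacity of a single drive
    "RAID 3":   lambda n: n - 1,
    "RAID 5":   lambda n: n - 1,
    "RAID 6":   lambda n: n - 2,
    "RAID TP":  lambda n: n - 3,
    "RAID 0+1": lambda n: n / 2,
    "RAID 30":  lambda n: n - 2,
    "RAID 50":  lambda n: n - 2,
    "NRAID":    lambda n: n,
    "JBOD":     lambda n: n,
}

def usable_tb(level, drives, drive_tb):
    """Usable capacity in TB for `drives` disks of `drive_tb` TB each."""
    return CAPACITY[level](drives) * drive_tb

print(usable_tb("RAID 5", 16, 0.5))   # 16 x 500 GB drives in RAID 5 -> 7.5 TB
```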
SCSI specification
The RAID system supports the standard SCSI specifications listed below. It is recommended to keep the external SCSI cable as short as possible.

SCSI Type           Data bits   Data Rate   Max Cable Length   Max SCSI devices
SCSI-1              8           5 MB/s      6 m                7
Fast SCSI           8           10 MB/s     3 m                7
Fast Wide SCSI      16          20 MB/s     3 m                15
Ultra SCSI          8           20 MB/s     1.5 m              7
Ultra Wide SCSI     16          40 MB/s     1.5 m              15
Ultra 2 SCSI        8           40 MB/s     12 m               7
Ultra 2 Wide SCSI   16          80 MB/s     12 m               15
Ultra 160           16          160 MB/s    12 m               15
Ultra 320           16          320 MB/s    12 m               15
2-1
Chapter 2
Quick Setup
This chapter introduces a simple way to configure the RAID system. It is written for those who are using a Maxtronic RAID system for the first time and want to create an array quickly. It includes these topics:
• Section 2.1, "System diagnostic during boot cycle"
• Section 2.2, "LCD screen symbols (drive status)"
• Section 2.3, "Menu navigation and mute system beeper from front panel"
• Section 2.4, "Quick Setup - Flow chart and procedures"
2.1 System diagnostic during boot cycle
Figure 9 displays a flow chart of the RAID system self-test on boot-up. During boot-up, the RAID system executes CPU, peripheral device, host, and disk chipset initialization. It consists of three modes (A, B, and C).
A - Initial RAID status
The RAID system has not been configured. Drives have not been installed;
there is no RAID configuration present in the hard drives or the RAID controller's NVRAM.
B - RAID system initializing
The RAID system is configured and drive initialization has begun.
2-2
C - RAID system exists
The RAID system has been configured. There is a RAID configuration present in the hard drives and the RAID controller's NVRAM.
Figure 9 - System Diagnostic Flow
Note:
The RAID configuration is stored both in a reserved area of the hard drives
and RAID controller’s NVRAM after an array set is created.
2-3
2.2 LCD screen symbols (drive status)
The LCD screen shows an overview of the RAID system's drive status. This section explains the meaning of the symbols that appear on the LCD screen.
Symbol Meaning
R The drive has an error or a fault
I RAID system is identifying the drive
S Global hot-spare drive
X No drive installed or drive is offline
W Warning – Too many bad sectors or unrecoverable data blocks in the drive or drive triggered “SMART failure warning” after running Disk SMART test
A The drive is being added to an array either during online expansion or rebuilding
C A clone drive (target drive)
1~8 Array group number that a drive belongs to
J The drive is in JBOD mode ( No configuration mode)
Below are some examples.
Example: RAID initial status
• Model Name: SA-6640S
• Disks 1 to 16 are all in JBOD mode (no RAID configuration mode)

Example: Two arrays with one global hot-spare drive
• Model Name: SA-6641S
• Disks 1 to 9 are members of Array 1
• Disks 10 to 14 are members of Array 2
• Disk 15 is a global hot-spare drive
• Disk 16 is not installed or is offline

Example: Array 1 @ RAID Level 5
• Array 1, RAID Level 5
• Disks 1 to 9 are members of Array 1

Example: Array 2 not created yet
• Array 2, RAID Level X (not available)
• No drive member exists
2-4
2.3 Menu navigation and mute system beeper from front
panel
The RAID system can be configured using the front panel function keys.
Menu Navigation from LCD Panel
Key Description
“UP” arrow key. To select options or scroll forward each character during data entry
“Down” arrow key. To select options or scroll back each character during data entry
ESC To escape or go back to previous menu screen
Enter To go to a submenu or to execute a selected option
Mute System beeper from LCD panel
The RAID system emits a beeping sound periodically when an error or
failure occurs in the disk array. This audible alert can be turned off temporarily
for the current event by pressing the "UP" and "Down" keys twice simultaneously.
The beeper will activate again if a new event occurs.
2-5
2.4 Quick Setup – Flow chart and procedures
This section provides a quick way to configure the RAID system with one-click
using the front panel. The RAID system will automatically create a single array
and map it to its first host channel.
Before you begin
It’s not recommended to connect the RAID system to the host computer before
completing configuration and initialization. If it is connected during the configuration process, resetting the controller could lead to occasional host server error messages such as parity errors or synchronous errors.
Note:
To ensure power redundancy, connect the power supplies to separate circuits, e.g. one to a commercial circuit and one to a UPS (Uninterruptible Power Supply).
Using the unit without a UPS greatly increases the chance of data and RAID configuration corruption.
2-6
Quick Setup Flow
Figure 10 shows the quick setup from the front panel function keys.
Figure 10- Quick Setup Flow
2-7
Quick Setup Procedures
Step1. Insert drives into RAID system
Make sure all of the drives are mounted securely to the disk trays and insert the disk trays into the disk slots.
Step2. Power ON RAID system
Note: After powering on, the LCD should display the status shown.
Ex: Model Name: SA-6640S
Step3. Enter password “0000” to login Main Menu from LCD
Press “Enter” key.
“Enter Passwd” message will appear on the
LCD screen.
Use the "UP" or "Down" key to select each character and enter "0000", then press "Enter" to proceed.
Note: “0000” is the default password.
Step4. Select RAID level in Quick Setup
Go to “Quick Setup->RAID Level”
Use the "UP" or "Down" key to select the desired level.
For example: 6+spare, Select “Yes” and
press ”Enter” to proceed.
Note: If a spare drive is not selected or reserved,
all of the installed disks will be configured as a
drive member of Array 1.
RAID level options in quick setup:
0/1/3/3+spare/5/5+spare/6/6+spare/TP/TP+spare
/0+1/30/50/NRAID
TP: Triple Parity allows three drives to fail in a
single array
Step5. Automatic system initialization
After setup, the system will begin to initialize.
It may take several hours depending on the
total capacity of the array.
Step6. Connecting host computer to 1st host channel of RAID system
Power off RAID system and connect 1st host
channel of RAID system to host computer.
Power on RAID, wait for the unit to completely
power on, then power on host computer.
2-8
Note: Before connecting the RAID system (SCSI) to the host server (Step 6):
1. Check the SCSI/FC IDs of the RAID system and the HBA to ensure they are not sharing the same ID with another device on the same channel. The default SCSI ID of the RAID system is 0; the Fibre ID setting is Auto.
2. In a SCSI daisy chain, SCSI termination is required on the last SCSI device. Make sure the SCSI bus is properly terminated.
3. Before you disconnect or connect a SCSI cable from the RAID system, power off the host computer, then the RAID system, for safety. Also note that the SCSI bus does not support hot-plugging.
3-1
Chapter 3
Advanced Setup
The RAID system can be configured using a terminal emulation program, such as HyperTerminal in Windows. This allows the user to create multiple RAID arrays and slices via the RS-232 or RJ-45 port. It includes the following topics:
• Section 3.1, "Advanced Setup - Flow chart and procedures"
• Section 3.2, "Deleting or reconfiguring an array"
• Section 3.3, "Deleting a slice of an array"
• Section 3.4, "Expanding an array"
• Section 3.5, "Regenerating an array's parity and performing a parity check"
• Section 3.6, "Configuring SAN masking for FC disk array"
3-2
3.1 Advanced Setup - Flow chart and procedures
This section introduces the advanced setup flow chart and detailed procedures to
configure the RAID system for multiple array groups, slicing, and LUN mapping to host
channels.
There are three methods to configure the RAID system.
1. Front panel function keys (Refer to Chapter 2 – Quick Setup)
2. VT-100 emulation program via RS-232 serial port or RJ-45 LAN port
3. Cross-platform Global Net software via RJ-45 LAN port
Figure 11- Advanced Setup Flow
3-3
3.1.1 Setup the connection to RAID system
This section introduces how to set up a VT-100 emulation program connection via the RS-232 and RJ-45 ports.
1. Using RS-232 serial port
To set up the serial port connection, follow these steps. This example will use
Hyperterminal for Windows.
Step1.
Use a serial cable (null modem) to connect the RS-232 port of RAID system
to the management console (or host computer)
Step2. Start Hyper Terminal on the management console
For Windows, click
“Start->Programs->Accessories->
Communication”, then select “Hyper
Terminal”
The "Hyper Terminal" window appears. If the "Location Information" window appears, skip it by clicking "Cancel" -> "Yes" -> "OK".
Step3. Make a new connection
When “Connection Description” dialog box
appears, give the new connection a name
For example: Disk Array, Click “OK” to
continue
3-4
Step4. Select PC serial “COM” port
Click “Connect using” field and select
“COM1”, then click “OK”
Note:
If you are unsure which COM port to use or if the connection does
not work, repeat the above steps and try a different COM port.
Step5. Set the COM port properties
In “Port Setting” tab, select
Bits per second => 115200
Data bits => 8
Parity => None
Stop bits => 1
Flow control => None
Click “OK” to continue
3-5
Step6. Display RAID utility interface
Press <Ctrl>+<D> to display the main screen of RAID utility.
Tip: <Ctrl>+<D> can refresh the screen information
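If you prefer to script the connection instead of using HyperTerminal, the same settings apply. Below is a minimal sketch using the third-party pyserial package (an assumption, not software shipped with the RAID system), with COM1 and the 115200/8/N/1 settings from Steps 4 and 5; Ctrl+D is sent as byte 0x04.

```python
import serial  # third-party "pyserial" package

# Same settings as in the HyperTerminal example: 115200 bps, 8 data bits,
# no parity, 1 stop bit, no flow control.
port = serial.Serial("COM1", baudrate=115200, bytesize=8,
                     parity=serial.PARITY_NONE, stopbits=1, timeout=2)

port.write(b"\x04")            # Ctrl+D displays/refreshes the RAID utility screen
print(port.read(4096).decode("ascii", errors="replace"))
port.close()
```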
2. Using RJ-45 LAN port (Optional)
To set up the RJ-45 LAN port connection, follow these steps.
Step1. Use an RJ-45 cable to connect the RJ-45 port of the RAID system to an Ethernet switch
Step2. Start Hyper Terminal on the management console
Repeat the same procedure in RS-232 setup.
Step3. Make a new connection
Repeat the same procedure in RS-232 setup.
Step4. Select “TCP/IP (Winsock)”
connection
Click “Connect using” field
and select “TCP/IP (Winsock)”,
then click “OK”
Note: Not all Maxtronic disk arrays support TCP/IP (Winsock) via a VT-100 emulation program. Contact Maxtronic support for more information.
3-6
Step5. Assign IP address of RAID system
Obtain IP address automatically
The default Ethernet setting of the RAID system is DHCP enabled. Connect the RJ-45 port of the RAID system to a dynamic (DHCP) network. An IP address will be automatically assigned to the RAID system.
Use the "Up" and "Down" arrow keys to navigate the LCD screen to find the IP address of the RAID system.
Assign IP address manually
The RAID system can also be assigned a static IP address manually via the LCD.
1. Enter the password "0000" from the LCD panel to log in to the RAID utility
2. Use the arrow keys to go to "System Params->Ethernet Setup->DHCP", select "Disable"
3. Return to "Ethernet Setup" to set the network manually.
Use the "Up" and "Down" arrow keys to assign:
IP address: xxx.xxx.xxx.xxx
Netmask: xxx.xxx.xxx.xxx
Gateway: xxx.xxx.xxx.xxx
Note: The default Ethernet setting of
RAID system:
IP address: 192.168.1.23
Netmask: 255.255.255.0
Gateway: 192.168.1.254
4. Key in the IP address of the RAID system in the "Host address" field of Hyper Terminal and set the "Port Number" to 4660.
Click "OK" to continue.
Step6. Display RAID utility interface
Repeat the same procedure in RS-232 setup.
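The TCP/IP (Winsock) connection can be scripted the same way. A minimal sketch with Python's standard socket module, assuming the factory-default IP address 192.168.1.23 and the management port 4660 from the steps above:

```python
import socket

RAID_IP, RAID_PORT = "192.168.1.23", 4660   # default IP; port from Step 4 above

with socket.create_connection((RAID_IP, RAID_PORT), timeout=5) as sock:
    sock.sendall(b"\x04")                   # Ctrl+D displays the main screen
    data = sock.recv(4096)
    print(data.decode("ascii", errors="replace"))
```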
3-7
RAID utility Interface
You will see the following RAID utility interface after you access the RAID
controller via the RAID system COM port or LAN port for the first time.
Use the following hot keys to explore the menu tree or execute RAID functions.
Tab Switch between “Main Menu” and “Output Area”
window
A Move cursor UP
Z Move cursor DOWN
S Scroll UP one page in output area
X Scroll DOWN one page in output area
ESC To escape or go back to previous menu screen
Enter Enter a submenu or to execute a selected option
Ctrl+D Display or refresh the menu screen
Note:
The default password to enter Main Menu of RAID system is
0000(four zeros)
3-8
Main Menu
The following figure is the main menu for SCSI/FC RAID system after logging into
the monitor utility using a VT-100 terminal program.
3.1.2 Setting the real time clock (RTC) and checking drive health
In order to perform advanced RAID functions, such as scheduled disk scrubbing, SMART disk cloning, and recording the date and time of event messages, the real-time clock must be set up.
Refer to Chapter 4 for detailed information about disk scrubbing and SMART.
Real Time Clock menu tree
Step1. Go to “Main Menu->System Params->RTC”
Step2. Select “Set RTC” to set the clock for RAID system in the format of
“MM/DD/YY W HH:MM”
Symbol   Range   Description
MM       01~12   Month
DD       01~31   Day
YY       05      Year
W        1~7     Day of the week (1: Monday; 2: Tuesday; 3: Wednesday; 4: Thursday; 5: Friday; 6: Saturday; 7: Sunday)
HH       01~24   Hour
MM       00~59   Minute
Checking drive health
Go to “Main Menu->Utility->Disk Utility->Disk Self Test” and run “Short Self
Test“ for all disks.
If a disk fails the Short Self Test, it is recommended to replace it with a new drive or run the Extended Self Test to further investigate drive health. For more information about Disk Self Test, see Chapter 4, Section 4.3.
3-10
3.1.3 Creating and slicing arrays
Below is the flow chart to create array groups and divide them into slices.
Figure 12- Multiple Arrays & Slices Creation flow
3-11
Array Params menu tree
Array Params
  Array 1/2/3/4/5/6/7/8
    RAID Level: 0/1/3/5/6/TP/0+1/30/50/NRAID/None -> Disk 1~16 (Yes/No)
    Slice: Slice 00~15 (MB)
    Initialization Mode: Foreground/Background
    Alignment Offset: Slice 00~15 -> None / NTFS
  Stripe Size: 1024/512/256/128/64/32/16/8 sectors
  Write Cache: Auto/Write-Back/Write-Through
  PreRead Setup: Enable -> Max ReadLog (32~999), Max PreRead (1~32); or Disable
  Slice Over 2 TB: Enable -> Sector Size (1024 Byte (4TB) / 2048 Byte (8TB) / 4096 Byte (16TB)) or 16 Byte CDB; or Disable
  Sector per Track: 128/255
  Expand Array: Array 1/2/3/4/5/6/7/8 -> Select number of disks
The RAID system can be configured with up to 8 array groups, each with a different RAID level, and each array group can be divided into a maximum of 16 slices.
Follow these steps to create and slice an array.
Step1: Choose the stripe size of RAID system
Go to "Main Menu->Array Params->Stripe Size" and select a stripe size based on the I/O behavior of the host application.
Recommendation:
1. Using the default stripe size, 128 sectors (64 KB), should be sufficient
for most applications.
2. Choose 256 sectors (128KB) stripe size if host application is mainly
sequential read/write IO
3. Choose 32 sectors (16KB) stripe size if host application is mainly
random read/write IO
Note:
Once the stripe size is selected, it will apply to the whole RAID system. All
arrays will use this specified stripe size.
3-12
Step2: Decide whether or not to create an over-2TB slice (Optional). Skip this
step if you do not wish to create a slice over 2 TB.
Currently, there are two ways to break the 2 TB limitation per slice: change the sector size, or enable 16-byte CDB (Command Descriptor Block).
2.1 Go to “Main Menu->Array Params->Slice Over 2 TB” and select
“enable”
2.2 Select “Sector size” or “16 byte CDB”
Note:
1. Currently, the "Sector size" option is supported on Microsoft Windows 2000/2003/2003 SP1; the "16 Byte CDB" option is supported on Windows 2003 Server SP1.
2. If the Slice Over 2 TB option is disabled and an array's capacity after initialization is over 2 TB, the RAID system will automatically create multiple slices limited to 2 TB each.
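For reference, the capacities shown for each sector size follow from 32-bit block addressing: the addressable capacity is 2^32 blocks multiplied by the sector size, which appears to be why a larger virtual sector size (or the 64-bit addressing of 16-byte CDBs) raises the per-slice limit. A quick check of the arithmetic (illustrative only):

```python
# 10/12-byte SCSI CDBs carry a 32-bit logical block address, so the largest
# addressable slice is 2^32 blocks multiplied by the (virtual) sector size.
for sector_bytes in (512, 1024, 2048, 4096):
    limit_tb = (2 ** 32) * sector_bytes // 2 ** 40     # in binary TB
    print(f"{sector_bytes:4d}-byte sectors -> {limit_tb} TB per slice")
# 512 -> 2, 1024 -> 4, 2048 -> 8, 4096 -> 16 (matching the 4TB/8TB/16TB menu options)
```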
Step3: Create multiple array groups
3.1 Go to “Main Menu->Array Params“, then select an array number
3.2 Go to “RAID Level”, and select a RAID level
3.3 Select drive members of the current array.
Note: A hot-spare disk is not a member of an array; reserving at least one spare drive in a unit is recommended.
3.4 Press <Escape> until the main menu appears
3.5 Repeat steps 3.1~3.4 to create more array groups.
Step4: Creating slices of an individual array
4.1 Go to “Main Menu->Array Params“, then select the array you want to
divide into slices.
4.2 Go to “Slice”, and select the first slice - Slice 00, type the size in
megabytes (MB), then press <Enter>.
Note: The output area will display the slice size that has been created.
4.3 Repeat Step 4.2 to create the next Slice 01 until the array is divided as
planned
Note: It is not allowed to create slices randomly. Create slices in
ascending sequence, for example: Slice 00->Slice 01…->Slice 15
3-13
Step5: Setting initialization mode
Initialization mode has two options, foreground and background. With
foreground initialization, an array will be accessible after initialization is
completed. With background initialization, an array will be accessible
during initialization.
5.1 Go to “Main Menu->Array Params”, and select the array group
5.2 Go to “Initialization Mode”, and select a mode you prefer, for example:
background, the array can be accessed during initialization.
Note: It may take several hours to complete foreground initialization depending on the total capacity of an array. It is recommended to use "foreground" mode to double-check drive health during RAID system initialization.
3.1.4 Assign SCSI/FC Channel ID & Mapping LUN to hosts
Every device requires a unique SCSI or Fibre channel ID. A SCSI chain can
support up to 15 SCSI devices with “Wide” function enabled. A Fibre loop can
support up to 125 FC devices. If multiple host computers require access to the same storage device (LUN), then clustering or multipath I/O software must be installed on these computers.
Note:
Maxtronic RAID also supports host-based multipath I/O software
PathGuard.
3-14
SCSI/Fibre Params menu tree
Assigning SCSI or Fibre Channel ID
Setting SCSI ID
1. Go to “Main Menu->SCSI Params”, then select a SCSI channel for
example: SCSI CH1
2. Select “Set SCSI ID”, then select a SCSI ID
3. Repeat Step 1~2 to set another SCSI Channel ID
Setting Fibre ID
1. Go to “Main Menu->Fibre Params”, then select a Fibre channel for
example: FC CH1
2. Select “Set Loop ID”, then select “Auto” or key in a number (0~125)
manually.
3. Repeat Step 1~2 to set another Fibre Channel ID
3-15
Note:
1. Check the SCSI/FC ID of RAID system and the HBA to ensure that
they are not sharing the same ID on the same channel.
The default SCSI ID of RAID system is 0, Fibre ID setting is Auto.
2. In a SCSI daisy chain, SCSI termination is required at the last SCSI
device. Make sure it is properly terminated on the SCSI bus.
3. QAS(Quick Arbitration Select) setting is required for Ultra 320
devices. If the QAS setting on the RAID system differs from the
HBA, there will be problems accessing the RAID unit from the host.
Refer to the following table to properly set QAS option in SCSI
Params. The RAID system and HBA require the same QAS setting.
HBA vendor QAS setting
Adaptec/ATTO Enable(Default)
LSI Disable
Mapping LUN(s) to a host channel.
LUN mapping is the process of making a slice visible on the desired host channel.
Each LUN will appear as a storage device to the host computer.
1. Go to “Main Menu”, then select “SCSI Params” for SCSI RAID or “Fibre
Params” for Fibre RAID
2. Select a specific host channel, for example: SCSI CH1 or Fibre CH1
3. Go to “Lun Map”, and select a LUN number.
4. Select an array, then select the desired slice to map to the chosen LUN
Ex. SCSI Params -> SCSI CH1 -> LUN Map -> LUN 0 -> Array 1 -> Slice0
This will map Slice 0 of Array 1 to LUN 0 of SCSI Channel 1
Note:
i. The same slice may be mapped to multiple LUNs on separate host
channels, but is only applicable for clustering or multipath environments.
ii. Selecting “Disable” will delete a LUN map. Deleting a LUN map will not
delete the data contained in the array or slice.
5. Press the <Escape> key to return to the Main menu
6. Repeat step 1~5 for each slice until all slices are mapped to a LUN
3-16
3.1.5 Save Configuration & System initialization
Go to “Main Menu->Save Config” and select “Save & Restart”, select “Yes” to
complete RAID configuration.
Note:
The RAID configuration is stored in a reserved area of the hard drives and the
RAID controller’s NVRAM after the array is created.
After the RAID system reboots, it will enter system initialization.
It may take several hours to complete depending on the total capacity of an array.
3.1.6 Connecting the RAID system to a host computer
Power off the RAID system and connect to a host computer or FC switch
Power on the RAID system, after it has completely powered on, power on the host
computer
Note: The host computer should be the last device to power on.
Summary of Advanced Setup
Step 1 Insert drives into RAID system
Step 2 Power ON RAID system
Step 3 Setup the connection to RAID system (via RS-232 or RJ-45)
& Setting real time clock
Step 4 Checking drive health(Short Self Test)
Step 5 Creating and slicing arrays
Step 6 Assign SCSI or FC channel ID and mapping LUNs to a host channel
Step 7 Save configuration & system initialization
Step 8 Connecting the RAID system to a host computer
3-17
3.2 Deleting or reconfiguring an array
Before deleting or reconfiguring an array, back up all required data. To reconfigure an array with different drive members, RAID level, or stripe size, delete the existing array and then reconfigure it.
To delete an array, follow these steps:
Step1. Go to “Main Menu->Array Params”, and select the array you want to
delete
Step2. Go to “RAID Level”, and select “None”
Step3. Select “Yes” to proceed
Step4. Go to “Main Menu->Save Config” and select “Save to NVRAM”. The
RAID system will not automatically reboot, and the array will be
deleted immediately.
Caution:
Deleting an array will destroy all data contained in that array.
3-18
3.3 Deleting a slice of an array
Follow these steps to delete a slice.
Step1. Go to “Main Menu->Array Params”, and select the desired array
Step2. Go to “Slice”, and select the last slice in the array, for example: Slice 02
Step3. Type "0" (MB) to delete Slice 02
Step4. Return to “Slice” then select “Slice 01”, type “ 0 ” (MB) to delete Slice 01
Step5. Go to "Main Menu->Save Config", then select "Save to NVRAM" for the change to take effect. Delete slices in descending sequence.
Similarly, you can follow the same steps to change the slice size.
Note: The use of third-party storage resource management (SRM) software or an OS file management program to divide or stripe a slice may lead to data fragmentation that decreases the I/O performance of the RAID system.
Caution:
Deleting a slice destroys all data contained in that slice. Back up your data before deleting a slice or changing its size.
3-19
3.4 Expanding an Array
Follow these steps to expand an array.
Step1. Go to “Main Menu->Array Params->Expand Array”, and select the
desired array
Step2. A list of disk numbers will then be prompted; select the disk to add to the array.
The RAID system then enters the online expansion process automatically.
After array expansion is complete, a new slice will be created from the added capacity.
Refer to Section 3.1.4 to map the new slice to the desired host channel/LUN.
Note:
Expanding an array will impact system performance; performing array expansion during off-peak hours is recommended.
3.5 Regenerating an array's parity and performing a parity
check
RAID parity might become inconsistent with data after extended periods of time.
Users can re-generate array parity or perform a parity check to ensure data integrity.
Follow these steps to regenerate parity or perform a parity check.
Step1. Go to “Main Menu->System Params“ then select “Init Parity” to
regenerate RAID parity or “Parity Check” to verify the parity
consistency.
Step2. Select the desired array, then select “Start” to proceed
Note: After the RAID system starts a parity check, if a parity inconsistency is detected, the check will stop and report the discrepancy.
Refer to Chapter 4, Section 4.1, about disk scrubbing to correct parity errors.
Note:
Init Parity and Parity Check can only be performed when RAID
system is in an optimal condition.
3-20
3.6 Configuring SAN masking for FC disk array
SAN masking is a RAID system-centric enforced method of masking multiple
LUNs behind a Fibre channel port. As Figure 13 shows, with SAN masking, a single
large RAID device can be subdivided to serve or block a number of different hosts that
are attached to the RAID through the SAN fabric. The host servers that access a
certain LUN through a particular port can be masked from accessing other LUNs
through the same port.
SAN masking can be set up on the RAID system or on the host computer's HBA. Masking a LUN at the device level is more secure than masking at the host computer, but not all RAID systems have LUN masking capability; therefore, some HBA vendors allow persistent binding at the driver level to mask LUNs.
Figure 13- SAN Mask example
3-21
SAN Mask menu tree
Fibre Params
  SAN Mask
    Supporting: FC CH1/CH2 -> Enable/Disable
    SAN Mapping: FC CH1/CH2 -> Host 1~Host 8, Host # (9~32) -> LUN0~LUN7 (Yes/No), LUN# (8~127) (Yes/No)
    Edit WWN Tbl: Host 1~Host 8, Host # (9~32)
    View WWN Tbl
    View Mapping: FC CH1/CH2 -> LUN0~LUN7 (Yes/No), LUN# (8~127) (Yes/No)
Follow these steps to enable SAN mask.
Step1. Go to “Main Menu->Fibre Params->SAN Mask->Supporting”, and select
the desired Fibre channel, for example: FC CH1, then select “Disable“
Note: The default setting, Enable, allows all SAN host computers to
access all LUNs via the fibre channel port.
Step2. Go to “SAN Mapping->FC CH1->Host 1”, select “LUN0”, and select “Yes”.
Step3. Repeat Step 2 to map,
“Host 2->LUN1”, “Host 3->LUN2”, “Host 4->LUN3”, “Host 5->LUN4”
Step4. Go to "Edit WWN Tbl", then type the 8-byte WWN of the FC HBA installed in each host server. Refer to your FC HBA documentation for more details.
4-1
Chapter 4
Using advanced functions of RAID system
This chapter further introduces the advanced RAID functions. It covers the following topics:
• Section 4.1, "Disk Scrubbing – Bad block detection and parity correction"
• Section 4.2, "Auto Shutdown – RAID system in critical condition"
• Section 4.3, "Disk Self Test – Drive health test in RAID system"
• Section 4.4, "Disk Clone – Manually cloning a failing drive"
• Section 4.5, "SMART – Predicting a drive failure and automatic disk cloning"
• Section 4.6, "AV Streaming – Performance-critical streaming application"
• Section 4.7, "Disk Standby Timer – Hard Disk Spin Down"
• Section 4.8, "PreRead cache – Enhancing sequential read throughput"
• Section 4.9, "Alignment Offset – Multipath I/O Software (PathGuard) for Windows OS"
4-2
4.1 Disk Scrubbing – Bad block detection and parity
correction
Objective
With the increasing capacity of hard drives, storage subsystem vendors face the challenge of handling bad blocks and parity errors. Bad sectors can form in HDD areas that are not accessed for long periods of time. These problems may lead to unrecoverable data loss. In order to effectively solve the problem and improve data availability, disk scrubbing (DS) was developed. DS can scan for bad sectors and/or parity errors in a RAID array. The RAID system reconstructs bad sectors from the other sectors and re-assigns them to an undamaged area. At the same time it also detects parity inconsistency; users can decide whether or not to overwrite inconsistent parity. DS is a proactive approach to data integrity; it keeps the RAID system in good condition.
Unrecoverable data loss
As Figure 14 shows, although all of the disks, disk #1~4, are online at t=t0, block number D3 is already a bad sector. Even after the rebuild process is completed at t=t3, data block D1 cannot be successfully regenerated from the other data and parity blocks.
Figure 14- Unrecoverable data
4-3
Parity inconsistency
Over long periods of time, parity blocks may not be consistent with data blocks.
This may result from unexpected power outages or resetting the RAID system
before cached data is written to drives. If the parity inconsistency is detected, it
indicates that a data error exists either on a data disk or the parity disk. However, the RAID
system cannot determine whether the error resides on the data or parity disks because of the
RAID algorithm. Enabling “Overwrite Parity” in disk scrubbing will automatically
correct data on the parity disk whenever parity inconsistency is detected.
If the array’s parity is seriously damaged, with overwrite parity enabled, data
loss may occur after disk scrubbing is completed. Disable it if the parity data has
been seriously damaged.
Figure 15 describes the detailed flow of disk scrubbing (DS). When DS is
running, the controller will read data and parity blocks of a stripe in an array and
execute a parity check. DS predicts data block failure and corrects parity errors in
order to enhance data integrity and availability.
Figure 15 function description of disk scrubbing
4-4
Note:
1. Disk scrubbing can only be activated when the array is in
optimal condition. This means there are no drive member
failures in the array and no background task in progress, i.e.
array expansion etc.
2. Disk Scrubbing will impact I/O performance; running DS during
off peak times is recommended.
3. If the RAID system is powered off while DS is running, it will not
resume on the next power up.
Follow these steps to configure disk scrubbing.
Figure 16 Disk Scrubbing Configuration Flow
4-5
Disk Scrubbing Menu Tree
Enabling the Overwrite Parity option is recommended. The Overwrite Parity option applies to Disk Scrubbing in both manual and scheduled modes.
Manual Scrubbing
Follow these steps to manually start and stop disk scrubbing,
Step1: Go to “Main Menu->Utility->System Utility->Disk Scrubbing” and select
Manual Scrubbing
Step2: Select All arrays or a single array
Step3: Select Start or Stop.
Once started, the percentage of progress is indicated on the LCD screen.
Step4: Repeat the above steps to start or stop scrubbing for other array groups.
4-6
Scheduled Scrubbing
Follow these steps to schedule disk scrubbing.
Note: Enable the RAID system clock before setting up scheduled scrubbing, or scrubbing will not activate. Refer to Section 3.1.2 to set up the system real-time clock (RTC).
Step1: Go to “Main Menu->Utility->System Utility->Disk Scrubbing” and
select Schedule Scrubbing
Step2: Select all arrays or a single array
Step3: Select Schedule ON
Step4: Select the preferred cycle to run disk scrubbing periodically.
For example: Once per 4 weeks
Step5: Select the Day of the week, for example: Sat
Step6: Manually key in the hour of 24 hour clock, for example: 00 hour
Step7: Repeat steps 1~6 to set up a scrubbing schedule for other array groups.
Disk Scrubbing Report
After disk scrubbing is completed, in the output window of hyper terminal, the following
information will be displayed. For example:
------------------------------------------------------------------------------------------------------------
Disk Scrubbing Result:
--1. Bad Block Check--
Disk # 1: Found 3 Bad Blocks, Recovered 2, Total 10+(3) Bad Blocks
Disk # 2: Found 6 Bad Blocks, Recovered 6, Total 0+(6) Bad Blocks
Disk # 3: Found 1 Bad Block, Recovered 1, Total 12+(1) Bad Blocks
………………………………………………………………….
Disk # 16:Found 0 Bad Blocks, Recovered 0, Total 19+(0) Bad Blocks
--2. Array Parity Check (Overwrite Parity YES) --
Array X: Found 3 Parity Errors, Overwrite Parity
Or
--2. Array Parity Check (Overwrite Parity NO) --
Array X: Found 3 Parity Errors, Overwrite Parity-NONE
----------------------------------------------------------------------------------------------------------
Description of the disk message: "Disk # a: Found b Bad Blocks, Recovered c, Total x+(b) Bad Blocks"
a is the disk number
b is the number of bad blocks found during this session of scrubbing
c is the number of bad blocks recovered
x is the total number of bad blocks
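If the terminal output is captured by a script, the per-disk report lines can be parsed mechanically. A small sketch follows (the regular expression simply mirrors the message format described above; it is not a tool provided with the RAID system):

```python
import re

LINE = re.compile(r"Disk #\s*(\d+)\s*:\s*Found (\d+) Bad Blocks?, "
                  r"Recovered (\d+), Total (\d+)\+\((\d+)\) Bad Blocks")

report = "Disk # 1: Found 3 Bad Blocks, Recovered 2, Total 10+(3) Bad Blocks"
m = LINE.search(report)
disk, found, recovered, total, new = map(int, m.groups())
print(f"disk {disk}: found {found} this run, recovered {recovered}, "
      f"running total {total}+({new})")
```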
4-7
4.2 Auto Shutdown – RAID system in critical condition
Objective
The RAID system can be protected against internal overheating or the loss of UPS AC power. In the RAID system, there are several thermal sensors located on the RAID controller and midplane. If the temperature inside the RAID system increases to a dangerous level, it may damage internal components, including the disk drives. With auto shutdown enabled, if overheating or AC power loss is detected, the RAID system will shut down after a given time duration.
Under the following conditions, auto shutdown will activate automatically if it is
enabled.
1. System temperature exceeds threshold
2. All fans have failed or are not available
3. UPS AC Power Loss
Auto shutdown menu tree
Note: The system real-time clock (RTC) must be activated for the auto shutdown feature to function properly.
Follow the steps to set up auto shutdown,
Step1: Go to “Main Menu->Utility->System Utility->Auto Shutdown->Enable“
Step2: Select “Event Trigger” and enable or disable each trigger option.
Step3: Select "Shutdown duration", for example: 15 min. The RAID system will turn off the power automatically after 15 minutes if a shutdown event has been triggered.
Note:
1. If the event triggers are disabled or the critical events return to normal, auto shutdown will be deactivated or canceled.
2. If an auto shutdown event is triggered, the write cache will
change from “Auto” to “Write-through” mode to ensure data
integrity.
4-8
4.3 Disk Self Test – Checking drive health in RAID system
Objective
Disk Self Test (DST) is used to test the health of disks while they are installed in the RAID system. Prior to DST, a user would have to remove disks individually and run a vendor-proprietary disk utility on a separate host computer. DST predicts the likelihood of near-term HDD degradation or fault conditions.
DST performs write tests, servo analysis, and read scan tests on the disks.
DST performs write test, servo analysis and read scan test on the disks.
Follow these steps to check disk health in the RAID system. The test flow is: run a Short DST on all disks first; if a disk fails, run an Extended DST on it; replace any drive that also fails the Extended DST, then create the array and slices once all disks pass.
Figure 17 Disk Self Test Configuration Flow
Disk Self Test menu tree
4-9
Disk Self Test:
Step1: Go to “Main Menu->Utility->Disk Utility->Disk Self Test“
Step2: Select “Short Self Test” and select “All Disks” or “Disk X” to start
drive self test
Step3: Select "Extended Self Test" if any error occurred after the short DST (Step2),
then select "All Disks" or "Disk X" to further check the suspected drive.
Swap the suspected drive if it does not pass the Extended Self Test.
Note:
1. Running DST before creating an array is recommended. DST will not overwrite data.
2. DST can only be executed in offline mode. This means that if there is any host activity, DST will be terminated and host I/O access will resume.
3. DST can also be performed through the LCD function keys directly. Key in the password "1111" to start a short or extended self test of all drives.
4. It may take several hours to run Extended DST depending on the drive capacity and spindle speed.
5. Most newer hard drives support DST; contact your drive vendor to see whether your drives support DST.
4-10
4.4 Disk Clone – Manually cloning a failing drive
Objective
Hard drives are the component most likely to fail in a RAID system, and it is very difficult to predict when a failure will occur. When a failure does occur, the RAID unit has to regenerate data from the non-failed hard drives to rebuild a new drive, and during this time the RAID system is in degraded mode. This is where Disk Clone (DC) can aid a user. Disk Clone can copy a failing drive to a hot spare, and upon completion of cloning, the cloned disk can take the position of the failing disk immediately or can stand by until the source disk fails. Disk cloning helps prevent a rebuild from ever occurring and keeps the unit out of degraded mode.
There are two options to clone a failing drive: “permanent clone” and “swap
after clone”.
In "permanent clone" mode, the clone disk (hot spare disk) will be the mirror of
the source disk until the source disk has failed. The clone disk will then replace the
source disk.
In "swap after clone" mode, immediately after the clone process is complete the
clone disk replaces the source disk and the source disk is taken offline.
Disk Clone menu tree
4-11
Follow the steps to start disk cloning manually,
Step1: Go to “Main Menu->Utility->Disk Utility->Disk Clone“, then select “Start
Disk Clone”
Step2: Select “Source Disk” which is the suspected failing drive in the array.
Step3: Select “Target Disk” and select a target drive. (Clone drive)
Note: Only the hot-spare drive will be displayed in target disk.
Step4: Select “Start Permanent Clone” or “Start Swap After Clone” to start disk
clone.
After DC is complete, the target disk (clone) status will be marked with a “C”
on the LCD panel.
Step5: Repeat these steps to clone other drives.
Note:
1. If cloning is in progress and the source disk fails or goes offline, the cloning disk will replace the source disk and become the array's member drive. The RAID system will also begin rebuilding at the point where cloning stopped.
2. If cloning is in progress, and a member drive fails in an array,
excluding the source disk, the cloning will stop and the RAID
system will begin rebuilding.
3. Disk clone can only be performed while the array is in an optimal condition.
4-12
4.5 SMART – Predicting a drive failure and automatic disk
cloning
Objective
Disk Clone (DC) is the process of manually cloning data from a source disk to a target disk. With the SMART function, the RAID system monitors drive health at preset polling intervals; if hard drive degradation is detected or the user-defined bad sector threshold is reached, the cloning function begins immediately.
SMART Event Trigger
There are two SMART event triggers that will begin disk cloning: a SMART
failure flag, and a user-defined bad sector threshold. The SMART failure flag is
triggered by the drive, and is defined by vendor-specific attributes that may differ
model to model. The user-defined bad sector threshold is a specific number of bad
sectors per drive. The user must input the bad sector threshold to start disk cloning.
SMART Mode
There are four modes in SMART function.
1. Disable:
SMART function is inactivated.
2. Enable (Alert Only):
The RAID system monitors the drives' SMART status at preset time intervals. When a SMART failure is detected, the user is alerted with a beeper and the drive's status is changed to "W" on the front LCD, which indicates a warning.
3. Enable (Permanent Clone):
The RAID system monitors the drives' SMART status; if a SMART failure is detected or the user-defined bad sector threshold is reached, disk cloning begins. Upon completion, the clone disk (hot spare disk) remains a mirror of the source disk until the source disk fails.
4. Enable (Swap After Clone):
The RAID system monitors the drives' SMART status; if a SMART failure is detected or the user-defined bad sector threshold is reached, disk cloning begins. Immediately after cloning has completed, the clone disk (hot spare disk) replaces the source disk and the source disk is taken offline.
4-13
4-14
SMART menu tree
Follow the steps to configure SMART function,
Step1: Go to “Main Menu->Utility->Disk Utility->SMART“, and select “Test Disk
SMART”
First, check whether or not your drives support SMART. If a drive does not pass the SMART test, the drive's status will change to "W" on the LCD screen.
Step2: Go to “Bad Sector” and decide the value of “Threshold for Clone” or
“Threshold for Swap”
For example: Threshold for Clone: 130, if a drive accumulates 130 bad
sectors, disk clone will start. Threshold for Swap: 200, if a drive
accumulates 200 bad sectors, the source disk will be taken offline.
Step3: Go to "Disk Check Time", then select a time interval to monitor the drives' SMART and bad sector status. For example: 60 minutes
Step4: Go to "SMART Mode", then select the mode you prefer.
Note:
1. Make sure the system clock is enabled before configuring the SMART function.
2. If the bad sector thresholds for Clone and Swap are disabled, disk clone will only be activated when a drive's SMART failure is detected.
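The decision logic driven by the two thresholds can be summarized in a few lines. This is a conceptual sketch with made-up helper functions, not firmware code, using the example threshold values from Step 2:

```python
CLONE_THRESHOLD = 130   # example "Threshold for Clone" from Step 2
SWAP_THRESHOLD = 200    # example "Threshold for Swap" from Step 2

def start_clone(disk):           # stand-in for the controller starting a clone
    print(f"disk {disk}: cloning to hot spare")

def take_offline(disk):          # stand-in for swapping and offlining the source
    print(f"disk {disk}: clone complete, source taken offline")

def smart_check(disk, smart_failure, bad_sectors):
    """Conceptual per-interval check in 'Swap After Clone' mode."""
    if smart_failure or bad_sectors >= CLONE_THRESHOLD:
        start_clone(disk)
    if bad_sectors >= SWAP_THRESHOLD:
        take_offline(disk)

smart_check(disk=7, smart_failure=False, bad_sectors=135)   # triggers clone only
```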
4-15
4.6 AV Streaming - Performance-critical streaming
application
Objective
Bad blocks and read or write delays of drives are unavoidable in a RAID system. For AV streaming applications, such as broadcast, post-production, and video/audio editing applications, these errors will cause choppy audio and/or video frame loss. In some instances the entire RAID system will stop operating.
Enabling the AV Streaming option in the RAID system can eliminate the chance of data transfer delays in a performance-critical streaming application. The AV Streaming option shortens the drive I/O response time, rearranges cache buffer management for read/write commands, and changes the algorithm used to read/write data.
Only enable the AV Streaming option after it has been tested in a real AV
streaming environment by an experienced engineer.
AV Streaming menu tree
Follow the steps to configure AV Application,
Step1: Go to “Main Menu->Utility->System Utility->AV Streaming“, then select
“Enable”
Step2: Go to “Disk Timeout” and then select a disk I/O timeout value, for example:
3 seconds
Note: Once the AV Streaming option has been enabled and the disk timeout has been changed to a low value, the RAID system will frequently report remapped blocks in the status log.
Step3: Go to "Remap Threshold" and then select the threshold at which the RAID system will start to issue remap warning messages.
4-16
Note:
1. AV streaming can only be enabled for a single array configuration.
If multiple arrays are present in the RAID system, AV streaming will not
work.
2. A single array with a single slice is the optimal RAID configuration for AV streaming. Partitioning an array into multiple slices for AV streaming is not recommended.
4-17
4.7 Disk Standby Timer – Hard Disk Spin Down
Objective
The Disk Standby Timer command makes the hard disks go into an idle state when no I/O requests are directed to them within a selected period of time. Allowing the disks to spin only when needed greatly reduces power consumption, thereby decreasing operational cost, and may extend the MTBF of a hard drive.
Follow the steps to configure Disk Standby Timer,
Step1: Go to “Main Menu->Utility -> Disk Utility -> Disk Standby Timer “.
Step2: Option items: Disable (default), 5 mins, 10 mins, 15 mins, 30 mins, 60 mins, 90 mins and 120 mins. Select the proper time period.
Note:
Restriction: the firmware will reject "Disk Standby Timer" operations under the following conditions:
a. The AV Streaming function is enabled.
b. The disk timeout value is less than 7 seconds.
4-18
4.8 PreRead cache - Enhancing sequential read
throughput
Objective
PreRead cache is used to accelerate the performance of applications that
access data sequentially, such as film, video, medical imaging and graphic
production industries.
With PreRead cache enabled, the RAID controller caches the next blocks of data that will be needed in the sequence. It reads the data from slower, nonvolatile storage and places it in fast cache memory before it is requested.
Only enable PreRead cache after it has been tested in a read-intensive application environment by an experienced engineer.
(Diagram: sequential blocks of data are moved into the RAID's cache in advance, then the pre-read data is moved from the RAID's cache to the host.)
4-19
PreRead menu tree
Follow the steps to configure PreRead function,
Step1: Go to “Main Menu->Array Params->PreRead Setup”, then select
“Enable”
Step2: Select “Max ReadLog” and key in a number, for example: 32
Max Readlog is the record of Read commands that were issued from a
host application.
Step3: Select "Max PreRead", then key in a number, for example: 16.
Max PreRead is the data depth that will be read ahead in advance.
Note: If PreRead is not properly set up, it will decrease the I/O performance of the RAID system.
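Conceptually, the controller logs recent host reads and, when they look sequential, prefetches the next blocks into cache before they are requested. A toy sketch of that idea follows (purely illustrative; the actual Max ReadLog/Max PreRead behavior is internal to the firmware):

```python
from collections import deque

MAX_READLOG = 32        # how many recent host reads to remember
MAX_PREREAD = 16        # how many blocks to read ahead once a sequence is seen

read_log = deque(maxlen=MAX_READLOG)
cache = set()

def handle_read(block):
    """Serve a host read; prefetch ahead if the access pattern is sequential."""
    read_log.append(block)
    if len(read_log) >= 3 and list(read_log)[-3:] == [block - 2, block - 1, block]:
        cache.update(range(block + 1, block + 1 + MAX_PREREAD))  # read ahead

for b in range(100, 110):
    handle_read(b)
print(sorted(cache)[:5])   # blocks 103.. were prefetched before being requested
```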
4-20
4.9 Alignment Offset – Multipath IO S/W (PathGuard) on Windows OS
Objective
Operating systems (OS) reserve private information, known as a signature block, at the beginning of a LUN. The result is a misalignment of disk striping. As Figure 18 shows, after a physical device is formatted with a file system, a data segment may cross two stripes, causing a split I/O command to complete a read or write.
Alignment offset is used to align the host logical block address (LBA) with the stripe boundary of a LUN, which enhances the I/O performance of the RAID system. In order to fix this problem and enhance system performance under different operating systems, the LUN should be offset based on its file system type.
Setting the alignment offset is recommended when using PathGuard, Maxtronic's multipath I/O software, for the Windows operating system.
(Diagram: before setting the alignment offset, the file-system data starting after the signature block crosses two stripes on the physical device, so two I/O commands are needed to read or write it; after setting the alignment offset, the same data fits within one stripe and a single I/O command suffices.)
Figure 18 Alignment Offset
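The effect can be checked with a little arithmetic (illustrative numbers only; the signature-block size varies by OS): with a 128-sector stripe, a file system that starts at LBA 63 spreads many stripe-sized requests across two stripes, while starting on a stripe boundary keeps each request within one stripe.

```python
STRIPE = 128          # sectors per stripe (the firmware default of 64 KB)

def stripes_touched(start_lba, length):
    """Number of stripes spanned by a request of `length` sectors at `start_lba`."""
    first = start_lba // STRIPE
    last = (start_lba + length - 1) // STRIPE
    return last - first + 1

# File system starts at LBA 63 (a typical legacy signature/partition offset)
print(stripes_touched(63, STRIPE))    # 2 -> the request is split into two I/Os
# With an alignment offset the file-system start lands on a stripe boundary
print(stripes_touched(0, STRIPE))     # 1 -> a single I/O
```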
4-21
Alignment offset menu tree
Follow the steps to configure alignment offset,
Step1: Go to “Main Menu->Array Params->Array x->Alignment Offset”, then
select a slice
Step2: Select "NTFS" if the slice will be formatted with the Windows NT file system
Note:
1. Currently, only the Microsoft Windows NT file system (NTFS) supports the alignment offset function.
2. The alignment offset should be configured before the slice is created.
3. PathGuard is host-based multipath I/O software for Windows 2000/2003 Server. Contact the technical support team for more information.
5-1
Chapter 5
Event Messages of the RAID System
This chapter lists the event messages recorded by the RAID system. It contains the following topics:
• Section 5.1, "Event Severity"
• Section 5.2, "Event List"
5.1 Event Severity
Events are classified into different severity levels.
1. Error – Event messages that indicate a significant problem, such as a drive/fan/power failure, etc.
2. Warning – Event messages that are not necessarily significant, but might indicate a possible future problem.
3. Information – Event messages that describe a successful operation of a RAID function.
5-2
5.2 Event List
1. Event severity: Error
DISK X initial fail, status 0xY ! Disk X initialization failed with status Y
DRAM TEST FAIL DRAM diagnostic test failed
Disk X retry SPIN_TIMEOUT Failed to retry operation to disk X within SPIN_TIMEOUT. The disk was offlined.
Disk X initial SPIN_TIMEOUT Disk X could not be ready within SPIN_TIMEOUT. The disk was offlined.
ERROR: Disk X Identify Data Error!
Failed to identify disk X. The disk was offlined.
ERROR: Disk X Inquiry data ERROR !
Invalid inquiry data on disk X. The disk was offlined.
ERROR: Disk parameters ERROR !
Invalid Cylinder/Head/Sector disk parameters found. The disk was offlined.
ERROR: No multi-sector mode !
Disk did not support multi-sector mode. The disk was offlined.
ECC Error Detected at Address 0xX
One or more bit errors were detected by ECC memory. The faulty address is X. If there is more than a one-bit error, the system hangs and the LCD shows "ECC MultiBit Err".
Error: spin IOC_READY timeout
SCSI chip initialization failed. Controller fault.
Error: No FreeChain. MX OY Magic: running out of scatter-gather resource on SCSI
Host Channel X Init Fail! Host channel X initialization failed. Controller fault.
Issue IOC Fact failed! SCSI chip initialization step failed. Controller fault.
Issue IOC Init failed! SCSI chip initialization step failed. Controller fault.
INIT: EnablePost X failed SCSI chip initialization step failed. Controller fault.
IOC reset failed SCSI chip initialization step failed. Controller fault.
IOC handle ready state failed SCSI chip initialization step failed. Controller fault.
INIT: CmdBufferPost failed SCSI chip initialization step failed. Controller fault.
INIT: EnableEvents failed SCSI chip initialization step failed. Controller fault.
INIT: EnablePost X failed SCSI chip initialization step failed. Controller fault.
LQ_CRC_ERR.. SCSI chip indicated CRC error during LQ-nexus IU transfer.
5-3
MSG_OUT_PARITY_ERR.. SCSI chip indicated parity error during message-out phase transfer.
Member Disk#X is Warning! The remapped-sector count of array member disk X has reached the threshold. The scrubbing procedure is canceled.
NVRAM :0xX Error! NVRAM testing failure at address X
NVRAM TEST FAIL NVRAM diagnostic test failed. Controller fault.
Overwrite Fail: DevID=0xX, BlkNo=0xY
Failed to overwrite parity while scrubbing a RAID6/TP Array
Pci SErr Assert[0xX]: 0xY PCI bus error code reported by SATA chip
Pci SErr Cause [0xX]: 0xY PCI bus error code reported by SATA chip
Param checksum ERROR! NVRAM superdata checksum error. This can result from a firmware upgrade or an NVRAM malfunction.
PCI BUS Error Total count:X The accumulated count of PCI bus errors reported by the SATA chip is X
PROTOCOL_ERR.. SCSI chip indicated a protocol error. This often results from earlier signal-quality issues.
Parity Overwrite Fail:DevID=0xX,BlkNo=0xY
Failed to overwrite parity while scrubbing a RAID5/RAID3/RAID50/RAID30 Array
Power module1: Fail Power module 1 of the chassis failed
Power module2: Fail Power module 2 of the chassis failed
PortX: issue PortFact failed SCSI chip initialization step failed. Controller fault.
PopulateReplyFreeFIFO failed!
SCSI chip initialization step failed. Controller fault.
RAID30/RAID50 Init ERROR! RAID30/RAID50 Array background initialization failed.
R6 X: Error!!More than 2 errors
More than 2 errors were found on a RAID6 Array. Failed to rebuild the Array.
RAID3/RAID5 Init ERROR! RAID3/RAID5 Array background initialization failed.
RPG ERROR XOR engine reported error. Controller fault.
ReceiveDataViaHandshake Fail1
SCSI chip initialization step failed. Controller fault.
ReceiveDataViaHandshake Fail2
SCSI chip initialization step failed. Controller fault.
5-4
ReceiveDataViaHandshake Fail3
SCSI chip initialization step failed. Controller fault.
ReceiveDataViaHandshake Fail4
SCSI chip initialization step failed. Controller fault.
SATA Chip X Pci Err ! main_int=0xY
PCI bus error detected by SATA chip X with interrupt status Y.
Scrub: I/O Error, Skip Row X Scrubbing skipped row X of a RAID6/TP Array because more than 2 (RAID6) or 3 (TP) errors were found.
SATA Chip X failed: Y Z SATA chip X initialization failed. Controller fault.
SendMessageViaHandShake Fail1
SCSI chip initialization step failed. Controller fault.
SendMessageViaHandShake Fail2
SCSI chip initialization step failed. Controller fault.
SendMessageViaHandShake Fail3
SCSI chip initialization step failed. Controller fault.
Send_HandShake_Request Fail1
SCSI chip handshake I/O failed. The system will retry.
Send_HandShake_Request Fail2
SCSI chip handshake I/O failed. The system will retry.
Send_HandShake_Request Fail3
SCSI chip handshake I/O failed. The system will retry.
Send_HandShake_Request Fail4
SCSI chip handshake I/O failed. The system will retry.
TP X: Y Errors!!More than 3 errors
More than 3 errors were found on a TP Array. Failed to rebuild the Array.
Warning: Source disk X error! Failed to read source disk X during the clone process
Warning: Target disk X error! Failed to write target disk X during the clone process
Warning:The start sector is incorrect!
Magic: the cloning block address is out of range
Warning: DiskX's remap area is full!
Remap area of the cloning target disk X was full. The disk was offlined.
5-5
2. Event Severity: Warning
Array X:Found Y Parity Errors, Overwrite Parity
Y parity errors were found on Array X during scrubbing. The parity was overwritten automatically.
Array X:Found Y Parity Errors,Overwrite Parity-NONE
Y parity errors were found on Array X during scrubbing. Parity writing was skipped.
Disk X ERROR: Y Block 0xZ Disk X read/write test error at block Z using test mode Y
Disk#X SMART Enable Fail! Failed to enable SMART function of disk X
Disk#X SMART Disable Fail! Failed to disable SMART function of disk X
DISK#X DST Fail! Disk X DST(Disk Self Test) failed
Disk#X: DST Completed, unknown failure, FAIL
Disk X DST(Disk Self Test) completed with unknown error.
Disk#X: DST Completed with Electrical failure, FAIL
Disk X DST(Disk Self Test) completed with electrical failure.
Disk#X: DST Completed with Servo failure, FAIL
Disk X DST(Disk Self Test) completed with servo failure.
Disk#X: DST Completed with Read failure, FAIL
Disk X DST(Disk Self Test) completed with read failure.
Disk#X: DST Completed with handling failure, FAIL
Disk X DST(Disk Self Test) completed with handling failure.
ERROR: Disk not support LBA48 addressing!
A disk does not support 48-bit LBA addressing and the current stripe size is over 256 sectors. Replace the disk or set the stripe size to 256 sectors or less.
Error occurs when zeroing disk X!
Failed to zero disk X for cloning process
Gateway IP Set Error Wrong Gateway IP address format. The legal format is xxx.xxx.xxx.xxx where xxx is a decimal value from 0 to 255
IDE_ISR_1(X): status 0xY, error: Z !!
Error found on disk X with status Y, error code Z.
IDE_ISR_2(X): status 0xY, error: Z !!
Error found on disk X with status Y, error code Z.
Input(X) error, LUN # must be 0 ~ 127.
Please number the LUN from 0 to 127
5-6
IP Address Set Error Wrong IP address format. The legal format is xxx.xxx.xxx.xxx where xxx is a decimal value from 0 to 255
Input(X) error, Host # must be 1 ~ 32.
Please number the host computer from 1 to 32
Modem timeout ! Modem operation timed out. The ongoing faxing or paging operation will be retried.
Param vender ID ERROR! Vendor ID mismatch in NVRAM. This happens on the first system startup.
Parity ERROR:blk 0xX !! Parity error at block X when parity check is in process
Parity P check error, RowBlkNo=X
Error found on Parity P when parity check is in process
Parity Q check error, RowBlkNo=X
Error found on Parity Q when parity check is in process
Parity R check error, RowBlkNo=X
Error found on Parity R when parity check is in process
Parity ERROR Disk#X Blk: 0xY !!
RAID3/RAID5 Array scrubbing error on disk X at block Y
RTC Parameters Error!! Wrong date-time format. The legal format is xx/xx/xx x xx:xx where x is a decimal value from 0 to 9.
RTC Parameters Month Error!!
Wrong Month format. The legal format is mm/xx/xx x xx:xx where mm is a decimal value from 1 to 12.
RTC Parameters Day Error!! Wrong Day format or violating the perpetual calendar. The legal format is xx/dd/xx x xx:xx where dd is a decimal value from 1 to 31.
RTC Parameters Year Error!! Wrong Year format. The legal format is xx/xx/yy x xx:xx where yy is a decimal value from 1 to 100.
RTC day of week Error!!
Wrong Day-of-Week format or violating the perpetual calendar. The legal format is xx/xx/xx w xx:xx where w is a decimal value from 1 to 7, standing for Monday through Sunday in increasing order.
RTC Parameters Hour Error!! Wrong Hour format. The legal format is xx/xx/xx x hh:xx where hh is a decimal value from 1 to 24.
RTC Parameters Minute Error!!
Wrong Minute format. The legal format is xx/xx/xx x xx:mm where mm is a decimal value from 1 to 60.
5-7
RAID30/RAID50 check ERROR!
RAID30/RAID50 Array parity check failed
RAID6/RAID TP CHECK ERROR, RAID=X
Parity check error on the RAID6/TP Array X.
RAID3/RAID5 check ERROR! RAID3/RAID5 Array parity check failed
Subnet Mask Set Error Wrong subnet mask format. The legal format is xxx.xxx.xxx.xxx where xxx is a decimal value from 0 to 255.
Timeout: X, Lostint: Y IDE timer expired. This reveals the accumulated timeout count and lost-interrupt count.
The IDE Timeout value must between 1 and 60 seconds!!
The specified IDE command timeout value should be from 1 to 60 seconds.
Warning RTC Not Working!! The RTC (Real Time Clock) was not started. Please start it by setting the correct time.
Part 2
i
RAID GUI
The RAID GUI can be used to remotely monitor the RAID controller. Users who
wish to make use of the controller monitoring capabilities of the RAID GUI simply need
a browser on a console with JRE (version 5.0 or later) installed.
Using this section: Part 2, RAID GUI, is intended to be read in a linear manner. Users may prefer to skip
more familiar sections, but each of the steps below must be completed.
Setup: Manually set up your IP address from the control
panel.
Access: Learn how to access RAID GUI from the Internet.
Monitor: Familiarize yourself with the real-time monitoring
capabilities of RAID GUI.
Accessing the RAID GUI from the Internet requires a console with JRE version
5.0 or later.
Do not set up your controller using the GUI and Terminal at the same time.
Doing so may cause system failure.
-1-1-
Chapter 1
Setting up RAID GUI
This chapter introduces how to set up RAID GUI.
You will find:
� How to set up the controller and your console.
� How to access the RAID GUI from the Internet.
Setting up the controller
Before you can monitor your controller from the browser by using RAID GUI, you need
to set up your network configuration through control panel.
1. To connect the two devices, you need to at least enter your IP address manually from your control panel.
2. Turn on SA-6640S. The controller will first
enter Self-Diagnostic Mode and then
enter Operation Mode. A typical
Operation Mode screen is shown.
3. Press Enter to enter Configuration Mode.
You will be prompted to enter the
password. The default password is 0000
(four zeros). To enter this, simply press
Enter eight times. Press Enter again to
submit.
4. Go to System Params > Ethernet Setup. If you disable DHCP, you may go on entering your IP address. You may also enter other information such as the Netmask, Gateway and MAC Address.
5. You will need the Java Runtime
Environment to access the RAID GUI.
Download it from www.java.com.
[LCD display examples: the Operation Mode screen shows "SA-6640S"; the Configuration Mode prompt shows "Enter Passwd 0000".]
-1-2-
Accessing the RAID GUI
1. Open any browser starting page and enter the IP address into the Address bar.
Java Runtime Environment (JRE) must be installed for RAID GUI to be
successfully displayed on a browser. If JRE is not already installed, the
application can be downloaded free from http://java.sun.com/downloads.
2. Once connected, the RAID GUI webpage automatically appears on the screen.
3. The dialog box that appears on the screen shows the connection state and any errors. Do not close the dialog box; otherwise, the GUI will not open.
-1-3-
4. When the connection is made, the Monitor mode page is shown. To access Config. mode, log in using the default password of 0000 (four zeros).
-2-1-
Chapter 2
Monitoring with RAID GUI
The RAID GUI browser allows you to remotely monitor the status of your RAID as it initializes and operates in real time.
This chapter introduces you to the RAID GUI's monitoring capabilities.
RAID GUI overview
RAID GUI monitors the status of your RAID controller(s) through an Ethernet connection.
The RAID GUI browser window first displays the Monitor Mode. This is the mode you
will be viewing before you log in to set up the Config. mode.
MONITOR MODE
Controller Information
-2-2-
The following colors and codes are used to indicate the status of the hardware and arrays.
Connection status: Green = Normal Connection, Red = No Connection.
User Login
To enter Config. mode (Configuration mode), you first need to log in. Enter your password (the default is 0000, four zeros) and press Login. Then press the Config button to enter configuration mode.
Controller Information
The controller provides information for Host Channel 1 and 2. In addition, your IP
address, model name, memory and firmware version will be displayed. You may click
on the icons to see the system status such as fan, power, system temperature, etc.
Click on the Host Chan 1 or Host Chan 2 tab to see the details. The following colors indicate whether there are any errors:
Green = Working, Red = Not Working.
Chassis Information
This section displays the chassis information, i.e. the number of inserted disks and which array (if any) they belong to.
Click on a drive to display the following
information:
Model No
Disk size in MB
Slot No
Status
Number of bad sectors
Which serial interface is being used (SATA I or SATA II)
Disk firmware version
Current Temperature
-2-3-
Disk colors: Green = Disk Online; Dark Green = Disk is Rebuilding or all disks are Expanding; Gray = Disk is Offline; Yellow = Disk is Cloned; Dark Yellow = Disk is Cloning; Purple = Disk is Self-Testing.
Disk codes: J = JBOD (Just a Bunch Of Disks); A = Rebuilding and Expanding; S = Spare; C = Clone; X = Offline; R = Array member but offline.
Array State
This section displays the array status. The following colors are used to indicate the
status of array.
Deep Red = Array has Failed; Purple = Array is Checking; Dark Green = Array Initialized; Blue = Array is Expanding; Green = Array Exists; Brown = Array is Scrubbing; Yellow-Green = Array is Rebuilding; Gray = Array does Not Exist.
Event log
This table displays a list of all events.
RAID GUI has two Event logs. In Monitor Mode all application events are
logged, e.g. user logins and all changes to the arrays. In Config. Mode
controller events are logged, e.g. system checks and shutdowns.
-2-4-
CONFIG MODE
Before configuring any settings in Config. Mode, you need to login by entering your
password (the default password is 0000, (4 zeros)). Once you have logged in, press the
Config. button and you will see the following browser page:
-2-5-
1. Quick Create Array
This feature allows you to create one or more arrays easily.
� Slice Setting
This feature allows you to configure your slice settings.
If your slice is over 2TB, be sure to activate Enable Over 2TB. Then you may select
between Variable Sector Size (1KB per Sector (4TB) is the default value) and 16 Byte
CDB (64-bit LBA mode). The latter is the standard method for slices over 2TB.
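The 2TB boundary behind these options comes from 32-bit LBA addressing with 512-byte sectors. A rough, illustrative sketch of the arithmetic (the function below is only a worked example, not firmware behaviour):

# Why slices above 2 TB need special handling: a 32-bit LBA limits capacity
# to (2^32 sectors) x (sector size), so either the sector size grows
# (Variable Sector Size) or the LBA width grows (16 Byte CDB, 64-bit LBA).
def max_capacity_tb(sector_bytes, lba_bits=32):
    return (2 ** lba_bits) * sector_bytes / 1024 ** 4

print(max_capacity_tb(512))               # ~2 TB  - standard 512-byte sectors
print(max_capacity_tb(1024))              # ~4 TB  - Variable Sector Size, 1 KB/sector
print(max_capacity_tb(2048))              # ~8 TB
print(max_capacity_tb(4096))              # ~16 TB
print(max_capacity_tb(512, lba_bits=64))  # effectively unlimited - 16 Byte CDB mode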
Sectors per Track
Select either 128 or 255. Users of the Solaris x86 operating system should select 255 if
LUNs are between 512GB and 1024GB.
-2-6-
� Quick Setup
Use the quick setup function to configure the RAID controller.
Array setting Select an array to set the array details: Array Level (Default: 0),
Stripe Size (Default: 128). Displayed array information includes
Available Drives, Select Drives, and Array Capacity (MB).
Chassis setting After you Select All Chassis, or select hard disks for Chassis 0,
press Confirm to execute the setting.
Slice setting After you select the Slice Number, Alignment Offset, Mapping
Channel, IDs and LUNs, press Apply. Slice Capacity will
display the total capacity. Press Save and Process to go
back to Monitor Mode. Note that a bar will indicate what
percentage of the process has been completed.
-2-7-
Quick Setup: process to create an array.
1. Select the number of the array you want to create.
2. Choose the array level from the drop-down menu (the array levels available vary depending on the number of disks available).
3. Select the required disks (White = unused, Yellow = used, Blue = selected but unused).
4. Press the Confirm button (if the number of disks does not match the chosen array level, an error will be shown).
5. Either accept the default settings or use the drop down menus to change the
Slice Number, Slice Capacity, Alignment Offset, Mapping Channel, Loop
ID and LUNs. Press Apply.
6. Press Delete to remove the slice (if more than one, the bottom one will be
removed first).
7. Press Save & Process to confirm the process and return to Monitor Mode.
When you create multiple arrays, press Confirm when you have selected
HDDs for each array.
-2-8-
2. Array Utilities
This feature allows you to delete and modify your array settings.
� Delete Array
This feature allows you to delete arrays.
Select an array and the array information such as Status, Array Level, Stripe Size,
Capacity and Disks is shown in order. You may also refer to the slice information in
the green box. Select Delete Array and click Save and Process to delete the
chosen array.
When an array fails and it is necessary to create a new array, delete the
failed array and create a new one.
-2-9-
� Modify Array
This feature allows you to modify the properties of existing arrays.
Select an array and the array information such as Status, Array Level, Stripe Size,
Capacity and Disks is shown in order. The following actions can be performed from
this window:
Delete a slice Check the box marked Del to delete a slice. A window will be
shown asking for confirmation.
Offset If available, choose from None, File System or User Define
(the default is None).
Change Slice capacity Amend the figure in the 'Slice Capacity' window. The
amount you can increase by is shown in the 'Free Capacity' window
at the bottom of the screen. If you decrease the capacity, any
free capacity will be shown there.
Change the LUNs Change the identifying number using the drop down menu.
Add Slices If there is any free capacity, additional slices can be added
by pressing the Add Slices button.
To finalize all modifications press the Save and Process button.
-2-10-
� Expand Array
This feature allows you to expand the arrays (only one array at a time).
You may only increase the number of hard disks; you cannot change
the array settings.
Once you’ve confirmed your action, wait until the expansion process is completely
finished. Do not change or select any function during the expansion process.
Once expansion has completed press Save and Process to proceed.
-2-11-
� Scrubbing Array
Disk scrubbing may be needed to correct any synchronization errors that have
occurred due to parity byte errors.
Overwrite Parity Select to execute overwrite parity.
Scrubbing setting Select an array or all arrays. Choose Manual Scrubbing or
Schedule Scrubbing and press Confirm button.
Result Report This displays array details.
-2-12-
� Disk Self Test
Under this menu, you can select View DST Information. Alternatively, select either
Short Self Test or Extended Self Test, choose a disk to test, and press Save
and Process to begin the test. To stop a disk self test, choose Stop Test, check the
required disk, and press Save and Process.
-2-13-
� Disk Clone
This feature allows you to clone disks, creating duplicates of existing disk
configurations.
Clone mode includes the following five options:
View Clone Info. See the clone details: Slot, Model Name, Size, State and
Percent.
Clone Only This feature is for clone action only. Select Clone Only and
then the system asks you to select the Source Disk and
Target Disk. Click Save and Process to start cloning.
Swap after Clone This feature allows the user to swap disks after cloning. Select
Swap after Clone and then the system asks you to select
Source Disk and Target Disk. Click Save and Process,
the system will start cloning. After cloning, the system will
automatically swap disks.
Stop or Cancel During the process, select Stop or Cancel to terminate the
action.
Replace This feature allows you to replace disks.
-2-14-
� SMART
This feature configures your system for self-monitoring, analysis, and reporting.
SMART Parameters There are four options under SMART Mode: Disable, Enable
(Alert Only), Enable (Permanent Clone), and Enable (Swap
after Clone). Once you enable SMART mode, be sure to specify the
Disk Check Time as one of 60 Min, 30 Min, 15 Min, or 1 Min.
Bad Blocks Parameters
You may specify the status and value for the Threshold for Clone
(Disable, 30, 80, 130, or 180 Blocks) and the Threshold for
Swap (Disable, 50, 100, 150, or 200 Blocks).
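The interplay between the two thresholds can be pictured with a small decision sketch. This is only a conceptual illustration of the behaviour described above; the threshold values and decision order below are illustrative assumptions, not the firmware's actual code.

# Conceptual sketch of the bad-block thresholds described above.
def clone_action(bad_blocks, clone_threshold=30, swap_threshold=50):
    """Suggest an action for a disk based on its bad-block count."""
    if swap_threshold is not None and bad_blocks >= swap_threshold:
        return "clone to a spare and swap the failing disk out"
    if clone_threshold is not None and bad_blocks >= clone_threshold:
        return "start a preventive clone to a spare disk"
    return "no action"

print(clone_action(10))   # no action
print(clone_action(35))   # start a preventive clone
print(clone_action(120))  # clone and swap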
-2-15-
� Test SMART
The Test SMART button allows you to check the status of any of the installed disks.
Check the Select All Disks box to confirm the disk status of all installed disks.
Alternatively, check the individual disks to be confirmed.
-2-16-
3. Configuration
This feature allows you to configure more of the controller's settings.
� SanMask Params
SAN Masking allows the administrator to specify which hosts are able to see the
LUNs. The SAN Mask tool differentiates hosts on the Fibre Channel network based on
the unique World Wide Port Name (WWPN) of each Fibre Channel card.
WWN Table
WWN Table : Set up the WWN Table before you can check the World Wide
Name. Choose the Host number from the drop down menu;
add the World Wide Name and Nick Name.
Host Chan l/2 : Select to set up LUN of host channel 1 and 2 and click Apply.
-2-17-
� Host Channels
The Host Channels setting allows you to choose the settings for each of the host
channels.
Host Choose the available host from the drop down menu.
Lun Choose the available LUN from the drop down menu.
Click Apply to apply the Host settings before changing to Host
channel 2.
Click Save and Process to apply the settings or press Delete to
remove the host.
-2-18-
� Cache Params
The Cache Params setting allows you to choose the configuration of both the Write
Cache and the Disk Cache.
Write Cache: Select Write Back, Write Through or Auto (default). If Auto is
selected, the system operates in Write Back mode when the
enclosure is in a normal state and in Write Through mode when it
is in a critical state.
Disk Cache: Select Enable (default) or Disable. It is suggested that you
select Enable for general situations and select Disable when
your system is equipped with a BBM (Battery Backup Module).
The unit will enter a critical state if any of the following happen: (1) Power
failure; (2) Fan failure; (3) Abnormal temperature detected; (4) Abnormal
voltage detected; (5) Low battery charge if a BBM is installed.
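A conceptual sketch of the Auto write-cache behaviour described above follows; it only restates the rule in code form and is not the firmware's published logic.

# Conceptual sketch of Write Cache = Auto: write-back while the enclosure is
# normal, write-through once any critical condition listed above is present.
CRITICAL_CONDITIONS = (
    "power failure", "fan failure", "abnormal temperature",
    "abnormal voltage", "low battery charge",
)

def effective_write_mode(setting, active_conditions):
    if setting in ("Write Back", "Write Through"):
        return setting
    critical = any(c in CRITICAL_CONDITIONS for c in active_conditions)
    return "Write Through" if critical else "Write Back"

print(effective_write_mode("Auto", []))               # Write Back
print(effective_write_mode("Auto", ["fan failure"]))  # Write Through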
-2-19-
� Host Params
The Host Params setting allows you to choose the individual settings for Host
Channels 1 and 2.
Host Chan 1 & 2: View the following details of host channels: Data Rate (Auto),
Connect Mode (Arbitration Loop, or Point to Point), Auto ID
(Enable or Disable) and Loop ID (when Disabled, set ID from
0-125). Loop ID is only displayed if Auto ID is disabled.
-2-20-
� Comm Params
The Comm Parameters setting allows you to choose the connection details for
both the terminal and network.
Terminal Params: Set the following details: Baud Rate, Stop Bit, Data Bit, and
Parity. The default values for these four items are
115200, 1, 8, and None respectively (a host-side example follows this section).
Network Params: Set the following details: DHCP, IP Address, Subnet Mask,
Gateway IP Address and DNS IP Address.
If you change the default value, you have to refresh your connection to the
GUI webpage.
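If you script the terminal connection from the management console instead of using a terminal emulator, the Terminal Params defaults above map directly onto a serial-port configuration. A minimal sketch, assuming Python with the third-party pyserial package on the console; the port name is an assumption for your system:

# Minimal sketch: open the controller's RS-232 console with the default
# Comm Params (115200 baud, 8 data bits, no parity, 1 stop bit).
import serial

port = serial.Serial(
    "/dev/ttyS0",              # e.g. "COM1" on Windows; adjust for your console
    baudrate=115200,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    timeout=1,
)
port.write(b"\r")              # wake the menu
print(port.read(256).decode(errors="replace"))
port.close()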
-2-21-
� System Params
The System Params settings allow you to configure basic settings such as time,
beeper and password.
Beeper Enable or Disable the beeper.
RTC Setting Set to display the real time.
Time Zone Select your current time zone from the drop down list.
Change Password Enter the Original Password, enter the New Password, and
Re-Enter the New Password.
Disk standby timer Configuration -> System Params ->Disk standby timer
Option items: Disable (default), 5 mins, 10 mins, 15 mins, 30
mins, 60 mins, 90 mins and 120 mins.
Restriction: the firmware will reject "Disk Standby Timer" operations
under the following conditions:
a. the AV Streaming function is enabled;
b. the disk timeout value is less than 7 seconds.
-2-22-
� Notify Params
In the event of a system error the RAID GUI will notify a specified person or
persons. The configuration details of where to send these notifications are
shown below.
SMTP Setting
Gap Time Choose the frequency of the e-mail notifications from the drop down menu.
Description of the machine
Enter the name of the RAID server.
SMTP Server From the drop down menu choose to either enter the SMTP details in either IP address or domain name formats.
Sender E-Mail / Password
Enter the sender's e-mail address and password.
Alternate SMTP server, e-mail and password
Enter secondary details in case the first set doesn't work.
Receiver 1, 2, 3 Enter up to 3 e-mail addresses to receive e-mail notifications.
Send test mail Check this box to send a test e-mail to the receiver e-mail address(es).
Press Save and Process to apply all the changes and return to Monitor Mode.
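Before saving the SMTP settings, it can help to confirm that the mail server and sender credentials work from the same network segment as the controller. A minimal host-side sketch using Python's standard smtplib; the server, account, password and recipient below are placeholders, not values from this manual:

# Minimal sketch: verify the SMTP server and sender account that will be
# entered in the Notify Params page.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "RAID notification test"
msg["From"] = "[email protected]"
msg["To"] = "[email protected]"
msg.set_content("Test message for the RAID e-mail notification settings.")

with smtplib.SMTP("smtp.example.com", 25) as server:
    server.login("[email protected]", "password")  # skip if the server allows anonymous relay
    server.send_message(msg)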
-2-23-
SNMP Setting
Enable/Disable
SNMP
Check this box to enable or disable the SNMP settings.
Host Name Enter the Host name of your SNMP server.
SNMP IP Address Enter the IP address of the SNMP server.
UDP Port The port on which the SNMP server listens. The RAID system
sends SNMP traps to this port. (Default port number is 162)
Community Name This is the name that the server uses for authentication.
SNMP Version Choose between Version 1 and Version 2. (Default: Version 1)
The SNMP version on the controller must match that of the SNMP server.
Press Save and Process to apply all the changes and return to Monitor Mode.
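To confirm that traps from the controller actually reach the configured UDP port, you can listen on the SNMP manager host. The sketch below only proves that a datagram arrives on port 162; it does not decode the SNMP payload, and binding to port 162 usually requires administrator/root privileges.

# Minimal sketch: listen on UDP port 162 (the default trap port configured
# above) and print the raw bytes of any trap the RAID controller sends.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 162))
print("Waiting for SNMP traps on UDP/162 ...")
while True:
    data, addr = sock.recvfrom(4096)
    print(f"Trap from {addr[0]}: {len(data)} bytes, first bytes: {data[:16].hex()}")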
-2-24-
4. System Functions
� Shutdown
This feature shows system’s current State and allows you to ShutDown or
Restart your system.
-2-25-
� Update F/W
This feature allows you to update the system’s firmware and boot cache.
Before updating the firmware, make sure that all pop-up blocking software is
turned off.
Updating the System Firmware (F/W)
1. To update the System F/W, click the Update System F/W button. It will be green
when selected.
2. Click the Update Start button. A dialog box pops up asking you to confirm your
update action; click Yes.
3. A new browser window will automatically open.
-2-26-
Updating the Boot Cache
1. To update the Boot Cache, click the Update Boot Cache button. It will be green
when selected.
2. Click the Update Start button. A dialog box pops up asking you to confirm your
update action; click Yes.
3. A new browser window will automatically open.
4. The RAID GUI updates the firmware.
5. The RAID GUI will display the new firmware version.
-2-27-
1. Search for the desired firmware file and click Submit to update. Upon your click, the
screen returns to the UI, showing the present updating status.
During the updating process, do not shut down the host system.
2. When the system finishes updating, there will be a notification message.
3. Shut down and restart the controller to run the new firmware, then restart the
RAID GUI after the firmware has been updated.
-2-28-
5. Information
� Disk info
The table below displays the chassis number and the status of each of the
disks.
Slot The chassis slot the disk is in.
Model Name The name of the disk.
Size The size of the disk in MB/GB.
Interface Which interface is used: SATA I or SATA II.
LBA48 If LBA48 is supported, "support" is shown. An X is shown if it is not
supported.
SMART If SMART is supported, "support" is shown. An X is shown if it is not
supported.
Bad Sector The number of bad sectors on the disk.
DST If DST is supported by the disk, "support" is shown. An X is shown if it is not supported.
Status The status of the disk: Healthy or Unhealthy.
-2-29-
� Array Info
This page displays the configuration details of the currently selected Array.
Select an array to see the detailed information such as Status, Array Level, Stripe
Size, Capacity, Disks, and status of Write Cache and Disk Cache. The Slice
information is shown in the right window.
Slice information
Details of the configured slices are shown in the right window. The following
information is shown: Size (MBs), Offset status, Channel Number, ID Number,
LUN number.
-2-30-
� System Info
This page displays the configuration details of the system.
This displays the Controller Information and Battery Backup Module
Information. Controller information includes Firmware version, Serial
Number, CPU Type, Installed Memory, and FC Chip. The Battery Backup
Module Information includes Temperature, Capacity, Status, Serial Number,
and Device Chemistry.
-2-31-
� Event log
This page displays a chronological list of all hardware events that have occurred
with the device.
Erase Press this button to delete all of the event logs. A confirmation
window will appear. Click OK to continue or Cancel to return to
the screen.
Reload Press this button to update the event log.
Mail Press this button to e-mail the receivers specified in the ‘Notify
Parameters’ configuration section.
-I-
Appendix
Upgrading Firmware of RAID System
Pre-configured RAID parameters
On-line/Off-line effective RAID parameters
Recording of RAID Configuration
Upgrading Firmware of the RAID System
The firmware of the RAID system can be upgraded via the LAN port. Contact the technical
support team for the latest RAID firmware.
Note: Group F/W upgrade will be supported soon.
Step1. Use a serial cable to connect the RS-232 port of the RAID system and the
management console (host computer)
Repeat the same step in Section 3.1.1 – Using the RS-232 serial port to set the
RS-232 serial parameters and start the Hyper Terminal program.
Make sure the connection is established and you are logged into the Monitor utility interface.
Step2. Use an RJ-45 cable to set up a connection between the LAN port of the RAID
system and the management console via an Ethernet switch
Ex : IP address of console: 10.10.4.85 (FTP server)
IP address of RAID system: 10.10.4.88
2.1 Connect to the RAID system from your management console via an
Ethernet switch
2.2 Set up an FTP server on your console and assign the firmware file path.
For example: Use a freeware FTP server, Tftp32, and select the directory
where the latest firmware is located.
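If you prefer a scripted FTP server to the freeware mentioned above for step 2.2, a minimal sketch using the third-party pyftpdlib package is shown below. The firmware directory is a placeholder, anonymous read-only access is an assumption about what the Boot Utility needs, and the console address reuses the 10.10.4.85 example above.

# Minimal sketch: serve the directory containing the firmware image over FTP
# so the RAID system's Boot Utility can download it in step 5.
# Requires the third-party package pyftpdlib (pip install pyftpdlib).
from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.handlers import FTPHandler
from pyftpdlib.servers import FTPServer

authorizer = DummyAuthorizer()
# Anonymous, read-only access to the folder that holds the latest firmware file.
authorizer.add_anonymous("C:/firmware", perm="elr")

handler = FTPHandler
handler.authorizer = authorizer

# 10.10.4.85 is the console (FTP server) address used in the example above.
FTPServer(("10.10.4.85", 21), handler).serve_forever()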
-II-
Step3. Log in to the Boot Utility Menu of the RAID system via the terminal program
Reboot the RAID system and, while the RAID system is running its self-test, press <Ctrl>+<B>
to enter the RAID's Boot Utility interface via the Hyper Terminal program.
-III-
Step4. Set the IP addresses of the console and RAID system
In the "Boot Utility Menu", press <N> to set the IP addresses for the console and RAID
system
For example: Local IP address: 10.10.4.88(RAID system)
Server IP address: 10.10.4.85(Console)
Step5. Load the firmware image file to the RAID system
Press <L> and a file name prompt appears. Type the firmware file name and press
Enter to load the image file from the console (FTP server) into the RAID system cache.
Note: Make sure the firmware file name and path are assigned and correct.
-IV-
Step6. Update the firmware to the system ROM
Press <S> to update the firmware to the RAID controller ROM, then press <Y> to start
writing the new system firmware to ROM.
Note: There are dual Flash ROMs on the RAID controller, a main ROM and a backup
ROM. This helps the RAID controller recover from any issues that may happen
during the firmware upgrade or from a single ROM failure.
Step7. Reboot the RAID system
Once the firmware has been written to Flash ROM 1 and ROM 2, press <R> to
reboot the RAID system. After the reboot, the RAID system runs the updated
firmware.
-V-
Pre-configured RAID parameters
The following table lists RAID parameters that should be determined at the initial stage of RAID system configuration.
1. Quick Setup
Parameter Default Setting Alternative
RAID Level 5 0/1/3/3+spare/5+spare/6/6+spare/TP/TP+spare/0+1/30/50/NRAID
2. Array Params
Parameter Default Setting Alternative
RAID Level 5 0/1/3/3+spare/5+spare/6/6+spare/TP/TP+spare/0+1/30/50/NRAID
Slice Slice 00 (max 2TB) Slice over 2TB
Initialization Mode Foreground Background
Stripe Size 128 sectors (64KB) 8/16/32/64/256/512/1024 (sectors)
Sector Size (Slice Over 2TB) 1024 Byte (4TB) 2048 Byte (8TB) / 4096 Byte (16TB)
16 byte CDB (Slice Over 2TB) 16 byte CDB
Sector per Track 255 128
3. SCSI Params ( For SCSI host interface)
Parameter Default Setting Alternative
Set SCSI ID 0 1~15
QAS Enable Disable
4. Fibre Params (For Fibre host interface)
Parameter Default Setting Alternative
Set Loop ID Auto Manually
5. System Params
Parameter Default Setting Alternative
Ethernet Setup DHCP enabled Manually set IP/Netmask/Gateway
Set RTC MM/DD/YY W HH:MM
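The RAID Level chosen in the tables above determines how much of the raw disk space remains usable. A rough sketch of the usual arithmetic, assuming equal-sized member disks; this is an illustrative calculation, not firmware output, and RAID30/50 combinations are not covered.

# Rough usable-capacity arithmetic for some of the RAID levels listed above,
# assuming n_disks equal member disks of disk_gb gigabytes each.
def usable_gb(level, n_disks, disk_gb):
    parity = {"0": 0, "NRAID": 0, "3": 1, "5": 1, "6": 2, "TP": 3}
    if level in parity:
        return (n_disks - parity[level]) * disk_gb
    if level in ("1", "0+1"):
        return n_disks * disk_gb / 2      # mirrored capacity
    raise ValueError("level not covered in this sketch")

print(usable_gb("5", 8, 500))   # 3500 GB: one disk's worth of parity
print(usable_gb("6", 8, 500))   # 3000 GB: two disks' worth of parity
print(usable_gb("TP", 8, 500))  # 2500 GB: triple parity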
-VI-
On-line and Off-line effective RAID parameters
Type I: RAID parameters that must be saved to NVRAM and require a RAID system reset to take effect
1. SCSI Params->SCSI CH1/CH2->Set SCSI ID, Speed, Wide, LUN Map, QAS
Type II: RAID parameters that only need to be saved to NVRAM to take effect
1. Array Params->Array x->RAID Level
2. Array Params->Array x->Slice
3. Array Params->Write Cache
4. Array Params->PreRead Setup->Max ReadLog, Max PreRead
5. Array Params->Slice Over 2TB
6. Array Params->Sector per Track
7. Fibre Params->FC CH1/CH2->Set Loop ID, Connect Mode, Set Data Rate, LUN Map
8. Fibre Params->FC CH1/CH2->SAN Mask
9. System Params->RS232 Params
10. System Params->Passwd Setup
11. System Params->Ethernet Setup->DHCP, IP Address, Netmask, Gateway
12. System Params->Beeper
Type III: RAID parameters that take effect directly without saving to NVRAM or resetting the RAID system
1. Main Menu->Quick Setup
2. Array Params->Expand Array->Array x->Select Disk Number
3. System Params->RTC->Set RTC
4. System Params->Init Parity
5. System Params->Parity Check
6. Utility->System Utility->Disk Scrubbing
7. Utility->Disk Utility->Disk Self Test
8. Utility->Disk Utility->Disk Clone
9. Utility->Disk Utility->SMART
10. Main Menu->Shutdown
-VII-
Recording of RAID Configuration
System Information
Product Model Name
Firmware Version
Serial Number
Installed Memory (MB)
Hard Drive Information ( Vendor/Model)
HDD 1 HDD 9
HDD 2 HDD 10
HDD 3 HDD 11
HDD 4 HDD 12
HDD 5 HDD 13
HDD 6 HDD 14
HDD 7 HDD 15
HDD 8 HDD 16
Ethernet Information
IP Address
Netmask
Gateway
Mac Address
Array Groups Information
Stripe Size (KB)
Write Cache (Auto/Enable/Disable)
-VIII-
Array Group (1~8)    RAID Level    Slice (0~15)    Capacity (GB)    Hot Spare (Yes/No)    RAID Member
SCSI Channel Information
SCSI Channel (CH1/CH2)    SCSI ID (0~15)    Speed (Ultra x)    Wide (Enable/Disable)    QAS (Enable/Disable)
Fibre Channel Information
Fibre Channel (CH1/CH2)    Loop ID (Auto/Manual)    Connection Mode (FC-AL/Pt-to-Pt)    Data Rate (1Gb/2Gb/Auto)
-IX-
LUN Mapping
SCSI/FC Channel (CH1/CH2)    LUN # (0~127)    Array # (1~8)    Slice # (00~15)    Capacity (GB)
-X-
Customer feedback and contacting Maxtronic technical
support
Fax your comments to: Maxtronic Incorporation
Technical Support Team
Fax: +886-2-22184896
Please let us know how you rate this document: JanusRAID Generic Software Manual
Place a check mark in the following table
Item Excellent Good Average Fair Poor
1. Technical content
2. Clarity of information
3. Completeness of Information
4. Ease of finding Information
5. Usefulness of Examples/Figures
6. Overall manual
What can we do to improve the document?
If there are any errors in this document, please list the error and page number.
-XI-
Contacting Maxtronic Technical Support
If you have technical questions about this product that are not answered in this document, please contact our support team:
Email: [email protected]
TEL : +886-2-22184875
MSN: [email protected]
Service hours: 09:00 ~ 18:00 (GMT+8, Taiwan/Taipei)