
MK-93MNGSV003-00

Hitachi Cloud Service for Content Archiving On-Ramps Guide for Hitachi Data Ingestor


Notices and Disclaimer

© 2013 Hitachi Data Systems Corporation. All rights reserved.

© 2011-2013 Hitachi, Ltd. All rights reserved.

    No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose without the express written permission of Hitachi, Ltd.

    Hitachi, Ltd., reserves the right to make changes to this document at any time without notice and assumes no responsibility for its use. This document contains the most current information available at the time of publication. When new or revised information becomes available, this entire document will be updated and distributed to all registered users.

    Some of the features described in this document might not be currently available. Refer to the most recent product announcement for information about feature and product availability, or contact Hitachi Data Systems Corporation at https://portal.hds.com.

    Notice: Hitachi, Ltd., products and services can be ordered only under the terms and conditions of the applicable Hitachi Data Systems Corporation agreements. The use of Hitachi, Ltd., products is governed by the terms of your agreements with Hitachi Data Systems Corporation.

    Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries. Hitachi Data Systems is a registered trademark and service mark of Hitachi, Ltd., in the United States and other countries.

    All other trademarks, service marks, and company names in this document or website are properties of their respective owners.


Document Revision Level

Revision    Date      Description
Version 0   7/19/13   Document created by GSS FCS Services Engineering
Version 1   9/6/13    First review draft
Version 2   11/5/13   First RC version

Contact Hitachi Data Systems
2845 Lafayette Street
Santa Clara, California 95050-2627
https://portal.hds.com

North America: 1-800-446-0744


    Table of Contents

Introduction
Overview
Contacting Hitachi Data Systems Support
Accessing the Hitachi Cloud Service Through On-Ramps
    Constructing the Namespace URL Access Point
    Accessing a Replicated Namespace on the Secondary Datacenter
MAPI Support and Workarounds
    MAPI Workaround: Adding Quotas
    MAPI Workaround: Editing the File System
    MAPI Workaround: Expanding the File System
Configuring Hitachi Data Ingestor for HCS-CA
    HDI Migration to HCS-CA Overview
    Configuring the Tenant
    Creating the File System
    Configuring Migration to HCS-CA
    Configuring a Migration Policy
    Specifying the Stub Threshold
    Monitoring Namespace Hard Quotas
    Monitoring Tenant Hard Quotas
    Setting Up Quota Information
    Recovering From Exceeding a Storage Quota Limit
    Backing Up the HDI Configuration


Introduction

Hitachi Cloud Service for Content Archiving (HCS-CA) is an enterprise-level public and enterprise-to-cloud data storage solution offered by Hitachi Data Systems (HDS). HDS maintains your data remotely and securely using our hardware, software, and personnel, while you retain easy access to your data, regardless of location.

HDS fully-managed cloud services include steps to:

• Provide an object storage solution that enables you to store, share, protect, preserve, analyze, and retrieve file data from an enterprise-level cloud platform

• Move content from primary stores on-site into off-site cloud storage

• Store copies of files remotely in a secure facility, reducing backup

• Monitor consumption and availability

• Allow versioning on cloud-stored files

    The following figure illustrates the seamless integration of Hitachi Cloud Service for Content Archiving with three of the most common types of cloud on-ramps.

    Figure 1. Cloud On-Ramp Examples.


    Although the process for configuring these on-ramps is generally straightforward, it is different for each type of archiving application. This guide covers several of the more common on-ramp applications likely to make use of HCS-CA.

    Overview

    Existing local file systems may have many large files that are accessed infrequently. These are good candidates for migration to the cloud. We refer to the creation and configuration of these cloud migration paths, which differ depending on the source system or architecture, as on-ramps.

    Contacting Hitachi Data Systems Support

    The Hitachi Data Systems customer support staff is available 24 hours a day, seven days a week. If you need technical support, log on to the Hitachi Data Systems Portal for contact information: https://portal.hds.com.

    Note: The HCS-CA subscription management interface has a tab labeled Help & Support that goes directly to the HDS support website.

    Accessing the Hitachi Cloud Service Through On-Ramps

    Accessing a provisioned Namespace and using a directory within it for data archiving is simple. The information you will need includes:

• The location, which is often a constructed URL

• The login name and password for one of that Namespace's authorized data users

    Additional details you will need to know include:

• The tenant name for the HCS-CA subscription

• The Namespace name

• The name of the archive destination directory in the Namespace

• The content location for your HCS-CA subscription, which will vary depending on which Hitachi datacenter you chose to host your account

    You can use the HCS-CA administrative interface at https://console.cloud.hds.com to collect the information for any given Namespace.

The data access user login and password are found through the user list on the Data Access Users page, under the Content Archive tab.

Note: Currently, there is no Namespace browser available, so you will need to collect this information manually. For more information, see the Hitachi Cloud Service for Content Archiving User's Guide.


    Constructing the Namespace URL Access Point

    Constructing the Namespace URL is very straightforward. It consists of:

"Namespace name" + "." + "Tenant name" + "." + "Content Location"

(Omit the quotes and plus-signs.) Each element is separated by a period (.) character. Currently, there are two content locations:

    content.us-az1.cloud.hds.com/rest

    content.us-nj1.cloud.hds.com/rest

    The designations az1 or nj1 indicate whether the content location is at our Arizona or New Jersey datacenter. In addition, these are SSL encrypted connections, so the prefix https:// is used.

    Note: When additional Hitachi data centers are added and made available, new content location URLs will become available for each one.

    Therefore, a complete URL looks like this:

    https://namespace.tenant.content.us-az1.cloud.hds.com/rest

    or

    https://namespace.tenant.content.us-nj1.cloud.hds.com/rest

    For example, with a tenant named BindersBooks, a Namespace named sjalpha1, and using the Hitachi New Jersey datacenter as the primary site, the constructed URL for the first Namespace created would be:

    https://sjalpha1.BindersBooks.content.us-nj1.cloud.hds.com/rest

    And for the subsequent Namespaces, the URLs would be:

    https://sjbetanamespace2.BindersBooks.content.us-nj1.cloud.hds.com/rest

    https://sjgammnamespace3.BindersBooks.content.us-nj1.cloud.hds.com/rest
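For scripted environments, the same construction rule can be expressed in a few lines of code. The following sketch (Python, illustrative only; the function name and example values are not part of HDI or HCS-CA) simply concatenates the three components described above:

# Build an HCS-CA Namespace REST access point from its three components:
# https:// + Namespace name + "." + Tenant name + "." + Content Location
def namespace_url(namespace: str, tenant: str, content_location: str) -> str:
    return "https://{0}.{1}.{2}".format(namespace, tenant, content_location)

# Example using the BindersBooks tenant from this guide:
print(namespace_url("sjalpha1", "BindersBooks", "content.us-nj1.cloud.hds.com/rest"))
# -> https://sjalpha1.BindersBooks.content.us-nj1.cloud.hds.com/rest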

The data access user credentials for accessing a particular Namespace are:

• The username for the data access user

• The data access user's password

    Note: With a replicated site HCS-CA subscription, in general you should direct your on-ramp requests to your primary hosted site. If there is any problem accessing the primary site, HCS-CA automatically retrieves data from the secondary replication site instead.


    Accessing a Replicated Namespace on the Secondary Datacenter

If you have a replicated HCS-CA subscription, normally you will always want to access your Namespaces using the credentials and constructed URL that go to your primary datacenter. (The primary datacenter is selected during the subscription selection process.)

However, it is possible to access your data directly from the secondary datacenter in read-only mode using those same user/password credentials and a slightly modified Namespace URL. This is possible because not only is your data replicated, but so are all the other details of your subscription, including configured Namespaces and authorized data users.

For example, if https://namespace.tenant.content.us-nj1.cloud.hds.com/rest is the primary site, this URL allows full read-write access. If you were to use the URL https://namespace.tenant.content.us-az1.cloud.hds.com/rest, this would allow read-only access, since it points to the secondary replicated site.
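Because the secondary site accepts the same credentials but only allows reads, an on-ramp script that retrieves data from HCS-CA can fall back to the secondary URL when the primary is unreachable. The following sketch illustrates only that failover pattern; the plain GET shown here is a placeholder for whatever REST client and authentication your HCS-CA subscription actually requires (see the HCS-CA User's Guide), and the hostnames follow the construction rule above.

import urllib.error
import urllib.request

PRIMARY   = "https://namespace.tenant.content.us-nj1.cloud.hds.com/rest"  # read-write
SECONDARY = "https://namespace.tenant.content.us-az1.cloud.hds.com/rest"  # read-only replica

def fetch_object(base_url: str, path: str) -> bytes:
    # Placeholder read: a plain GET against the REST access point.
    with urllib.request.urlopen("{0}/{1}".format(base_url, path.lstrip("/")), timeout=30) as resp:
        return resp.read()

def read_with_failover(path: str) -> bytes:
    try:
        return fetch_object(PRIMARY, path)
    except (urllib.error.URLError, OSError):
        # Primary unreachable: fall back to the read-only secondary datacenter.
        return fetch_object(SECONDARY, path)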

MAPI Support and Workarounds

The Hitachi Data Ingestor (HDI) does not currently support MAPI for access to HCS-CA services. This results in some functionality limitations and requires a workaround to support migration to the Cloud from HDI.

In general, MAPI calls are required for the following functions:

• Using the Setup Wizard and configuring the HCP

• Adding quotas when creating the Share/File System

• Editing the File System

• Expanding the File System

• Backing up to HCP

    MAPI Workaround: Adding Quotas

To add quotas when creating the Share/File System:

1. Use the HCP graphical user interface (GUI) to create the Namespace and add the quota size.

2. Use the HDI command line interface (CLI) to add the HCP Tenant.

3. In the HDI GUI, add the share but do not select Add Namespace/Quota.

4. Still in the HDI GUI, add the migration policy. Manually enter the Namespace and set the quota size.

5. Using the HDI CLI, set the file system quotas (hard and soft).

    MAPI Workaround: Editing the File System

To edit the File System, use the HDI CLI command sudo fsedit. The online CLI help lists the available parameters as follows:

service@node-0:~$ sudo fsedit -h
KAQM14136-I Usage: fsedit [-w [-M max-retention] [-m min-retention]
                              [-a on [-A auto-commit-period]
                                     [-D default-retention]]
                              [-R {allow|deny}]]
                          [-L {allow|deny}]
                          [--hcp-replica-host host-name]
                          [[--hcp-account user-name]
                            --hcp-password password]
                          [--versioning {use|do_not_use}]
                          [--period-to-hold period]
                          [--compaction use]
                          [-y]
                   fsedit -h

    MAPI Workaround: Expanding the File System

To expand the file system:

1. Using HFSM, create and map a new LUN through SNM2 to all ports connected to the HDI cluster.

2. Use the command sudo fsexpand.

You can use the following commands:

• sudo lumaplist to list all current LUNs

• sudo lulist to list all current LU device files

• sudo fslist to view file system details

The following example shows the basic steps to do this:

service@node-0:~$ sudo lumaplist
  LUN   Target    Model   serial     LDEV(hex)   type    size
   00   N0-T000   HUS     91140136   1(0001)     SAS7K   2048.000GB
   01   N0-T000   HUS     91140136   2(0002)     SAS7K   2048.000GB
   02   N0-T000   HUS     91140136   0(0000)     SAS7K   500.000GB
   03   N0-T000   HUS     91140136   4(0004)     SAS7K   200.000GB
   04   N0-T000   HUS     91140136   5(0005)     SAS7K   100.000GB
   05   N0-T000   HUS     91140136   6(0006)     SAS7K   100.000GB

service@node-0:~$ sudo lulist
Device files for use:
  /dev/enas/lu00   SAS7K   2048.000GB
  /dev/enas/lu01   SAS7K   2048.000GB
  /dev/enas/lu02   SAS7K   500.000GB
  /dev/enas/lu05   SAS7K   100.000GB


service@node-0:~$ sudo fslist
List of File Systems:
The number of file systems(1)
File system(used by) : cifs1
Total disk capacity(GB) : 300.000

Configuring the Tenant

To register the HCS-CA tenant connection on HDI, use the CLI command sudo archcpset, as in the following example:

service@node-0:~$ sudo archcpset --host content.us-ca1.cloud.hds.com --tenant hdi --user-name hdi --password HDI.Data

    Creating the File System

To set up migration to HCS-CA using the graphical user interface (GUI), the basic steps are as follows:

• Set up the File System.

• Share the File System (without any reference to a Namespace).

• Use the Migration Wizard to create the link from the File System to HCS-CA.

The specific steps are as follows:

1. Navigate to Processing node (HDI1) > Node Name (node-0).

2. Click the Create and Share File System button at the bottom of the page.

    Figure 2. HDI Create and Share File System function.

3. From the Create and Share File System page:

   • Enter a share name. This will be used for the File System name.

   • Deselect the Use Namespace checkbox. The Namespace will be configured during the migration setup.

   • In the Capacity section, to limit the capacity of the LU to be used (optional), select File System LU Size and specify the capacity.

   • Select the LU to be used to create the File System by clicking the Add File System LUs button.

4. When finished, at the bottom of the page, click OK.


    Figure 3. HDI Create and Share File System page.

Check to make sure the settings are correct, then click the Confirm button at the bottom of the page.

    Figure 4. HDI Create and Share File System confirmation.

    Configuring Migration to HCS-CA

To create the migration link between the File System and HCS-CA, in the GUI menu, click GO, then select the Migration Wizard from the drop-down list.


    Figure 5. Starting the HDI Migration Wizard.

Select the node and click OK.

    Figure 6. Selecting the migration node.

The next page displayed lists the steps and settings involved with creating the migration link. Click the Next button to continue.


    Figure 7. HDI Migration Wizard Link creation steps.

The Policy Name page displays. Enter a name for the policy. A description for the policy is optional. Click Next to continue.


    Figure 8. Setting the Migration Wizard policy name.

On the Source/Target page, use the drop-down list under File System Name to select the appropriate File System.

Enter the HCS-CA service Namespace name in the Input the Migration Target section.


    Figure 9. HDI Migration Wizard setting the source and target.

    After you have set up the migration policy parameters, click Next.

HDI uses a two-step process for migrating data to HCS-CA:

1. Migrate all data meeting the conditions set in the configured migration policy.

2. Create stub files when the file system reaches its configured threshold, beginning with the oldest files first.

    Configuring a Migration Policy

Still in the Migration Wizard, the Criteria step is where you create the migration policies. The first step is to choose the files controlled by a given policy.

    Figure 10. HDI Migration Wizard adding migration policies.

Use the drop-down lists to specify the rules to select the files. To add additional conditions, click the plus (+) button. To remove a condition, click the minus (-) button. To reset all the rules, click Reset.
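The rules built in this step amount to a file-selection predicate. As a purely conceptual illustration (this is not HDI code, and the size and age thresholds are assumptions for the example), a policy that selects files larger than 10 MB that have not been modified in 90 days could be expressed as:

import os
import time

MIN_SIZE_BYTES = 10 * 1024 * 1024   # example threshold: 10 MB
MAX_AGE_DAYS = 90                   # example threshold: 90 days since last modification

def matches_policy(path: str) -> bool:
    # Return True if the file meets the example migration criteria.
    st = os.stat(path)
    age_days = (time.time() - st.st_mtime) / 86400
    return st.st_size >= MIN_SIZE_BYTES and age_days >= MAX_AGE_DAYS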

When satisfied with your rule set, click Next to continue. In the next step, create a schedule for running the migration policy.


    Figure 11. HDI Migration Wizard schedule migration policy.

Click Next to review and confirm the policy. You can also enter an optional policy description at this point. When satisfied, click Confirm to continue.


    Figure 12. HDI Migration Wizard review and confirm migration policy.

When finished, the migration policy is added to the Task Management list as a scheduled item. You should verify it is correct.

    Figure 13. HDI Task Management list showing scheduled migration tasks.

    Specifying the Stub Threshold

A stub file is a small pointer file used as a placeholder to tell the system that its parent (original) file is being stored elsewhere. For example, a stub file on a local file system or hard disk will automatically route requests to access the file to wherever it is actually stored, in this case, HCS-CA. The process of accessing the file pointed to by the stub is entirely transparent to the user or application.

The decision of when to create stub files and migrate data to HCS-CA is governed by the stub threshold.

The stub threshold specifies the conditions under which file stub processing will be performed. For example, when the remaining file system capacity is equal to or less than the configured threshold value, HDI performs stub processing on files until the file system capacity is higher than the threshold amount again. Stub processing is performed on the oldest files first.

The default stub threshold setting is 10%. Therefore, for example, a file system with a capacity of 200 GB would trigger the stubbing process whenever free space drops below 20 GB.
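The arithmetic behind the threshold is simple enough to check directly. The following sketch is not HDI code; it just restates the rule above (stub processing starts when remaining capacity is at or below the threshold percentage of total capacity) so you can sanity-check your own sizing:

def stubbing_triggered(free_gb: float, capacity_gb: float, threshold_pct: float = 10.0) -> bool:
    # Stub processing starts when remaining capacity is equal to or less than
    # the configured percentage of total file system capacity.
    return free_gb <= capacity_gb * (threshold_pct / 100.0)

# The example from this guide: 200 GB file system, default 10% threshold.
print(stubbing_triggered(free_gb=20, capacity_gb=200))   # True  (at the 20 GB trigger point)
print(stubbing_triggered(free_gb=35, capacity_gb=200))   # False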

To set the threshold, use the CLI command sudo arcreplimitset. The following example shows how to set the stub threshold to 20% for the file system cifs1.

service@node-0:~$ sudo arcreplimitset --rest-size 20% --file-system cifs1
service@node-0:~$ sudo arcreplimitget --file-system cifs1
Replication limit rest size : 20%
service@node-0:~$

Note: If you are copying data to HDI and migrating data to the HCP at the same time, and the stubbing process begins, it can affect CIFS access and result in network errors.

    Monitoring Namespace Hard Quotas

Because HCS-CA does not support MAPI calls, it is not possible to set up Namespace quotas directly on HDI. This means data migrations can exceed the subscribed storage within the Namespace. The checking performed by HCP is a loose compare (an asynchronous check) against the regular file system quota.

The following illustration shows an example HDI data migration where the system has exceeded the quota by 82 GB. The orange line in the bottom-left graph represents the migrated data.

    Figure 14. HDI Cloud Namespace quota exceeded.
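Because the Namespace hard quota is only loosely enforced from HDI's point of view, it can be worth comparing usage against the subscribed capacity yourself, using figures taken from the HCS-CA administrative interface or your own reporting. The comparison logic is trivial; the numbers below are hypothetical, in the spirit of Figure 14:

def quota_status(used_gb: float, quota_gb: float, warn_pct: float = 90.0) -> str:
    # Flag usage that is approaching or has exceeded the subscribed Namespace capacity.
    if used_gb > quota_gb:
        return "OVER QUOTA by {0:.2f} GB".format(used_gb - quota_gb)
    if used_gb >= quota_gb * (warn_pct / 100.0):
        return "WARNING: {0:.1f}% of quota used".format(100.0 * used_gb / quota_gb)
    return "OK"

# Hypothetical 200 GB Namespace exceeded by 82 GB, as in Figure 14:
print(quota_status(used_gb=282, quota_gb=200))   # OVER QUOTA by 82.00 GB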

    Monitoring Tenant Hard Quotas

As with the Namespace hard quota, the Tenant (overall HCS-CA subscription) quota is also checked as a loose compare (an asynchronous check) with the regular file system quota. As a result, the Tenant hard quota can be in an oversubscribed state, but the next incremental data migration will fail.

For example, in the following illustration, the bar graph in the upper right shows the Tenant (hdi1) subscription of 200 GB has been exceeded by 41.97 GB.

    Figure 15. HDI Tenant hard quota exceeded.

Upon the next migration process, the status report indicates that it failed. In the following illustration, it reports near the end that all 641 files selected for migration failed.

    Figure 16. HDI migration failure due to quota being exceeded.

    Setting Up Quota Information

System administrators can set quotas for file systems or directories from the command-line interface (CLI). Directory quotas are also called subtree quotas.


Note: For more information on how to set quotas, see the Hitachi Data Ingestor Cluster Administrator's Guide or the Hitachi Data Ingestor CLI Administrator's Guide.

    Some additional guidelines:

• A CIFS client cannot view quota information from the Windows properties display. To view it, use File Services Manager.

• A default quota cannot be set for a group.

To set the default quota, use the command sudo quotaset. In the following example, the default soft quota is set to 180 GB and the hard quota to 190 GB.

service@node-0:~$ sudo quotaset -d -b 180g,190g cifs1
service@node-0:~$

A quota monitoring schedule also needs to be set. In the next example, quota monitoring is scheduled four times a day, at 01:00, 06:00, 13:00, and 19:00.

service@node-0:~$ sudo quotaset -m -s 01:00,06:00,13:00,19:00 cifs1
service@node-0:~$

To check the quota monitoring and default quota settings:

service@node-0:~$ sudo quotaget -c -m cifs1
cifs1:Use a summary notification:0100,0600,1300,1900
service@node-0:~$ sudo quotaget -c -d cifs1
cifs1:184320:194560:0:0
service@node-0:~$
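The block limits in the quotaget -c -d output are raw numbers rather than the g-suffixed values used when they were set; interpreting them as MB (an assumption based on these figures) matches the configured 180 GB and 190 GB defaults:

# 184320 and 194560 from the quotaget output above, interpreted as MB.
soft_mb, hard_mb = 184320, 194560
print(soft_mb / 1024, hard_mb / 1024)   # 180.0 190.0 -> the 180 GB / 190 GB defaults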

    Recovering From Exceeding a Storage Quota Limit

If you have exceeded storage hard quota limits, you will need to increase the quota limits on both the HCS-CA Namespace and the Tenant.

In the following overview display illustration, after the original quota of 200 GB was exceeded, it was increased to 250 GB.

    Figure 17. HDI quotas increased.

The following illustration shows, in the middle column, a series of warning messages as the migrated data approached and then exceeded both the soft and hard quota limits. In the bottom-left graph, the orange line shows when this happened; the blue line, toward the right of the graph, shows where the quota was increased.

    Figure 18. HDI Cloud quota usage details.

    At this point, migration tasks can be resumed.


    Figure 19. HDI migration task successful.

    Backing Up the HDI Configuration

Because MAPI calls are not available, HDI cannot create a system-backup-data Namespace on the HCP.

    Figure 20. HDI system backup failure.

To resolve this, you need to create a system-backup-data Namespace manually on the HCP, using the same username as the current Namespace. The following illustration shows one such example Namespace.


    Figure 21. HDI system-backup-data Namespace.

Once this is done, you will be able to create or modify the backup schedule and transfer the system backup files to the Namespace.

    Figure 22. HDI system backup schedule.


Hitachi Data Systems
Corporate Headquarters
2845 Lafayette Street
Santa Clara, California 95050-2639 U.S.A.
www.hds.com

Regional Contact Information

Americas
+1 408 970 1000
[email protected]

Europe, Middle East, and Africa
+44 (0) 1753 618000
[email protected]

Asia Pacific
+852 3189 7900
[email protected]