
ModelArts

User Guide (AI Beginners)

Issue 01

Date 2020-02-25

HUAWEI TECHNOLOGIES CO., LTD.


Copyright © Huawei Technologies Co., Ltd. 2020. All rights reserved.

No part of this document may be reproduced or transmitted in any form or by any means without prior written consent of Huawei Technologies Co., Ltd.

Trademarks and Permissions

Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd. All other trademarks and trade names mentioned in this document are the property of their respective holders.

Notice

The purchased products, services and features are stipulated by the contract made between Huawei and the customer. All or part of the products, services and features described in this document may not be within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information, and recommendations in this document are provided "AS IS" without warranties, guarantees or representations of any kind, either express or implied.

The information in this document is subject to change without notice. Every effort has been made in the preparation of this document to ensure accuracy of the contents, but all statements, information, and recommendations in this document do not constitute a warranty of any kind, express or implied.


Contents

1 How Do AI Beginners Use ModelArts?

2 Data Management
2.1 Data Management Overview
2.2 Creating a Dataset
2.3 Importing Data
2.3.1 Import Operation
2.3.2 Specifications for Importing Datasets
2.3.2.1 Specifications for Importing Data from an OBS Directory
2.3.2.2 Specifications for Importing the Manifest File
2.4 Labeling Data
2.4.1 Image Classification
2.4.2 Object Detection
2.4.3 Text Classification
2.4.4 Named Entity Recognition
2.4.5 Sound Classification
2.4.6 Speech Labeling
2.4.7 Speech Paragraph Labeling
2.5 Publishing a Dataset
2.6 Managing Dataset Versions
2.7 Modifying a Dataset
2.8 Team Labeling
2.8.1 Team Labeling Overview
2.8.2 Team Management
2.8.3 Member Management
2.8.4 Managing Team Labeling Tasks
2.9 Deleting a Dataset

3 Training Management
3.1 Model Training Overview
3.2 Built-in Algorithms
3.2.1 Introduction to Built-in Algorithms
3.2.2 Requirements for Datasets
3.2.3 Algorithms and Their Running Parameters
3.3 Creating a Training Job
3.4 Managing Training Job Versions
3.5 Viewing Job Details
3.6 Managing Job Parameters
3.7 Managing Visualization Jobs

4 Model Management
4.1 Model Management Overview
4.2 (Optional) Purchasing Model Tuning
4.3 Importing a Model
4.3.1 Importing a Meta Model from a Training Job
4.3.2 Importing a Meta Model from a Template
4.3.3 Importing a Meta Model from a Container Image
4.3.4 Importing a Meta Model from OBS
4.4 Managing Model Versions
4.5 Publishing a Model
4.6 Model Compression and Conversion
4.6.1 Compressing and Converting Models
4.6.2 Model Input Path Specifications
4.6.3 Model Output Path Description
4.6.4 Conversion Templates

5 Model Deployment
5.1 Model Deployment Overview
5.2 Real-Time Services
5.2.1 Deploying a Model as a Real-Time Service
5.2.2 Viewing Service Details
5.2.3 Testing the Service
5.2.4 Accessing a Real-Time Service
5.2.5 Publishing to AI Market
5.3 Batch Services
5.3.1 Deploying a Model as a Batch Service
5.3.2 Viewing the Batch Service Prediction Result
5.4 Edge Services
5.4.1 Deploying an Edge Service
5.4.2 Accessing an Edge Service
5.5 Modifying a Service
5.6 Starting or Stopping a Service
5.7 Deleting a Service

6 AI Market (Old Version)

7 AI Market (New Version)

8 Resource Pools

9 Model Templates
9.1 Model Template Overview
9.2 Template Description
9.2.1 TensorFlow-Based Image Classification Template
9.2.2 TensorFlow-py27 General Template
9.2.3 TensorFlow-py36 General Template
9.2.4 MXNet-py27 General Template
9.2.5 MXNet-py36 General Template
9.2.6 PyTorch-py27 General Template
9.2.7 PyTorch-py36 General Template
9.2.8 Caffe-CPU-py27 General Template
9.2.9 Caffe-GPU-py27 General Template
9.2.10 Caffe-CPU-py36 General Template
9.2.11 Caffe-GPU-py36 General Template
9.3 Input and Output Modes
9.3.1 Built-in Object Detection Mode
9.3.2 Built-in Image Processing Mode
9.3.3 Built-in Predictive Analytics Mode
9.3.4 Undefined Mode

10 Model Package Specifications
10.1 Model Package Specifications
10.2 Specifications for Compiling the Model Configuration File
10.3 Specifications for Compiling Model Inference Code

11 Permissions Management
11.1 Creating a User and Granting Permissions
11.2 Creating a Custom Policy

A Change History


1 How Do AI Beginners Use ModelArts?

AI beginners with certain AI knowledge can use their own business data and select common algorithms (ModelArts built-in algorithms) for model training to obtain new models.

For details about how to use a built-in algorithm to build a model, see AI Beginners: Using a Built-in Algorithm to Build a Model.

Figure 1-1 Usage process for AI beginners

Table 1-1 Usage process

● Data preparation
  – Creating a dataset: Use your own business data to create a dataset in ModelArts to manage and preprocess your data. For details, see Creating a Dataset.
  – Labeling data: Label the data in your dataset based on the service logic to facilitate subsequent training. The data labeling affects the model training effect. For details, see Labeling Data.
  – Publishing the dataset: After the data is labeled, publish the dataset to generate a dataset version for model training. For details, see Publishing a Dataset.
● Model training
  – Creating a training job: Create a training job, select an available dataset version, and select a built-in algorithm to train a model. After the training is completed, the generated model is stored in OBS. For details, see Built-in Algorithms and Creating a Training Job.
  – (Optional) Creating a TensorBoard job: Create a TensorBoard job to view the model training process, learn about the model, and adjust and optimize the model. TensorBoard applies only to the MXNet and TensorFlow engines. For details, see Managing Visualization Jobs.
● Model management
  – Importing a model: Import the trained model to ModelArts to facilitate model deployment. For details, see Importing a Model.
● Service deployment
  – Deploying a service: Deploy the model as a real-time, edge, or batch service. For details, see Deploying a Model as a Real-Time Service, Deploying a Model as a Batch Service, and Deploying an Edge Service.
  – Accessing the service: After the service is deployed, access the real-time or edge service, or view the prediction result of the batch service. For details, see Accessing a Real-Time Service, Viewing the Batch Service Prediction Result, and Accessing an Edge Service.


2 Data Management

2.1 Data Management Overview

In ModelArts, you can import and label data on the Data Management (Beta) page to prepare for model building. ModelArts uses datasets as the basis for model development or training.

Dataset Types

ModelArts supports the following types of datasets, covering images, audio, and text.

● Image classification: identifies if an image contains an object.
● Object detection: identifies the position and class of each object in an image.
● Sound classification: classifies and identifies different sounds.
● Speech labeling: labels speech content.
● Speech paragraph labeling: segments and labels speech content.
● Text classification: assigns labels to text according to its content.
● Named entity recognition: assigns labels to named entities in text, such as time and locations.
● Text triplet: assigns labels to entity segments and entity relationships in the text.
● Free format: manages data in any format. Currently, labeling is not available for data of the free format type. The free format type is applicable to scenarios where labeling is not required or developers customize labeling.

Precautions

● The datasets created in ModelArts cannot be used in ExeML projects.
● Currently, the new Data Management (Beta) module and the old Data Management module coexist for dataset management on ModelArts. The old Data Management module is about to go offline and is hidden from the left navigation pane of ModelArts. You are advised to use the Data Management (Beta) module to manage datasets. If you have stored data in the old Data Management module, migrate the data in a timely manner. To access the old Data Management module that is hidden from the left navigation pane, click Deprecated Dataset Page in the upper right corner of the Data Management (Beta) page.

Dataset Management Process and Functions

Figure 2-1 Labeling management process

Table 2-1 Function description

● Creating a Dataset: Create a dataset.
● Image Classification, Object Detection, Text Classification, Named Entity Recognition, Sound Classification, Speech Labeling, Speech Paragraph Labeling: Label data based on the eight types of datasets. Labeling is not available for data of the free format type.
● Import Operation: Import the local manifest file or data stored in OBS to the dataset.
● Modifying a Dataset: Modify the basic information about a dataset, such as the dataset name, description, and labels.
● Publishing a Dataset: Publish the labeled dataset as a new version for model building.
● Managing Dataset Versions: View data version updates.
● Team Labeling Overview: Allow multiple users to label the same dataset and enable the dataset creator to manage labeling tasks in a unified manner. Add a team and its members to participate in labeling datasets.
● Deleting a Dataset: Delete a dataset to release resources.

2.2 Creating a Dataset

To manage data using ModelArts, you need to create a dataset first. Then you can perform operations on the dataset, such as labeling data, importing data, and publishing the dataset.

Prerequisites

● Before using the data management function, you need to have permission to access OBS. This function cannot be used if you have not been authorized to access OBS. You can choose Data Management (Beta) > Datasets in the left navigation pane. On the displayed page, click Service Authorization to apply for permission authorization.
● You have created OBS buckets and folders for storing data. In addition, the OBS buckets and ModelArts are in the same region.
● You have uploaded the data to be used to OBS.

Procedure

1. Log in to the ModelArts management console. In the left navigation pane, choose Data Management (Beta) > Datasets. The Datasets page is displayed.
2. Click Create Dataset. On the displayed Create Dataset page, set parameters based on Table 2-2 and click Create.


Figure 2-2 Create Dataset

Table 2-2 Parameter description

● Name: Enter the name of the dataset. A dataset name can contain only letters, digits, underscores (_), and hyphens (-).
● Description: Enter a brief description for the dataset.
● Input Dataset Path: Click to select an OBS path where the dataset to be imported is stored.
● Output Dataset Path: Click to select an OBS path for storing your labeled dataset.
  NOTE: The output dataset path cannot be the same as the input dataset path or a subdirectory of the input dataset path.
● Labeling Scene: Select Object, Audio, Text, or Other.
● Labeling Type:
  – If Object is selected for Labeling Scene:
    Image classification: identifies if an image contains an object.
    Object detection: identifies the position and class of each object in an image.
  – If Audio is selected for Labeling Scene:
    Sound classification: classifies and identifies different sounds.
    Speech labeling: labels speech content.
    Speech paragraph labeling: segments and labels speech content.
  – If Text is selected for Labeling Scene:
    Text classification: assigns labels to text according to its content.
    Named entity recognition: assigns labels to named entities in text, such as time and locations.
    Text triplet: assigns labels to entity segments and entity relationships in the text.
  – If Other is selected for Labeling Scene:
    Free format: manages data in any format. The free format type is applicable to scenarios where labeling is not required or developers customize labeling.
● Label Set:
  – Label Name: Enter a label name. The label name can contain only Chinese characters, letters, digits, underscores (_), and hyphens (-), and contains 1 to 32 characters.
  – Add Label Attribute: Label attributes can be added to the datasets of the object detection type.
  – Add Label: Click to add a label.
  – Label Color: Select a color from the color palette, or enter the hexadecimal color code to set the color.
● Team Labeling: When Labeling Type is set to Object detection, the team labeling function is supported. You can enable or disable team labeling. Before enabling the team labeling function, ensure that you have added a team and members on the Labeling Team page. If no labeling team is available, click the link on the page to go to the Labeling Team page, and add your team and members. For details, see Team Labeling Overview. After a dataset is created with team labeling enabled, you can view the Team Labeling mark in Labeling Type.

After the dataset is created, the dataset management page is displayed. You can perform the following operations on the dataset: label data, publish, manage versions, modify, import, and delete.

2.3 Importing Data

2.3.1 Import Operation

After a dataset is created, you can directly synchronize data from the dataset. Alternatively, you can import more data by importing the dataset. Currently, data can be imported from an OBS directory or the manifest file.

Prerequisites

● You have created a dataset.
● You have stored the data to be imported to OBS. You have stored the manifest file in OBS.
● Ensure that the OBS buckets and ModelArts are in the same region.

Import Modes

There are two import modes: OBS path and Manifest file.

● OBS path: indicates that the dataset to be imported has been stored in an OBS directory in advance and data is imported from the OBS directory. In this case, you need to select an OBS path that you can access. Additionally, the directory structure in the OBS path must comply with the specifications. For details, see Specifications for Importing Data from an OBS Directory. Only the following types of dataset support the OBS path import mode: Image classification, Object detection, Text classification, and Sound classification.
● Manifest file: indicates that the dataset file is in the manifest format and data is imported from the manifest file. The manifest file defines the mapping between labeling objects and content. Additionally, the manifest file has been uploaded to OBS. The maximum size of a manifest file is 8 MB. For details about the specifications of the manifest file, see Specifications for Importing the Manifest File.

Importing Data from an OBS Directory

The parameters on the GUI for data import vary according to the dataset type. The following uses a dataset of the image classification type as an example.

1. Log in to the ModelArts management console. In the left navigation pane, choose Data Management (Beta) > Datasets. The Datasets page is displayed.
2. Locate the row that contains the desired dataset and choose More > Import in the Operation column. Alternatively, you can click the dataset name to go to the Dashboard tab page of the dataset, and click Import in the upper right corner.
3. In the Import dialog box, set Import Mode to OBS path and set OBS path to the path for storing data. Then click OK.

Figure 2-3 Importing the dataset to an OBS path

After the data import is successful, the data is automatically synchronized to the dataset. On the Datasets page, you can click the dataset name to view its details and label the data.

Importing Data from a Manifest File

The parameters on the GUI for data import vary according to the dataset type. The following uses a dataset of the object detection type as an example.

1. Log in to the ModelArts management console. In the left navigation pane, choose Data Management (Beta) > Datasets. The Datasets page is displayed.
2. Locate the row that contains the desired dataset and choose More > Import in the Operation column. Alternatively, you can click the dataset name to go to the Dashboard tab page of the dataset, and click Import in the upper right corner.
3. In the Import dialog box, set the parameters as follows and click OK.
   – Import Mode: Select Manifest file.
   – Manifest file: Select the OBS path for storing the manifest file.
   – Import by Label: The system automatically obtains the labels of the dataset. You can click the add icon to add a label or click the deletion icon on the right to delete a label. This field is optional. After importing a dataset, you can add or delete labels during data labeling.
   – Import labels: If this parameter is selected, the labels defined in the manifest file are imported to the ModelArts dataset.
   – Import only hard examples: If this parameter is selected, only the hard attribute data of the manifest file is imported. Examples whose hard attribute is true in the manifest file are hard examples.

Figure 2-4 Import

After the data import is successful, the data is automatically synchronized to the dataset. On the Datasets page, you can click the dataset name to go to the Dashboard tab page of the dataset, and click Label in the upper right corner. On the displayed dataset details page, view detailed data and label data.

2.3.2 Specifications for Importing Datasets


2.3.2.1 Specifications for Importing Data from an OBS Directory

When a dataset is imported, the data storage directory and file name must comply with the ModelArts specifications if the data to be used is stored in OBS.

Only the following types of dataset support the OBS path import mode: Image classification, Object detection, Text classification, and Sound classification. Therefore, the following describes only the specifications of the four types of dataset.

Image Classification

For image classification, images with the same label must be stored in the same directory, and the label name is the directory name.

In the following example, Cat and Dog are label names.

dataset-import-example
├─Cat
│      10.jpg
│      11.jpg
│      12.jpg
│
└─Dog
       1.jpg
       2.jpg
       3.jpg

● If data is imported from an OBS path, you must have the permission to read the OBS path.
● Only single labels are supported.
● Only images in JPG, JPEG, PNG, and BMP formats are supported. The size of a single image cannot exceed 5 MB, and the total size of all images uploaded at a time cannot exceed 8 MB.
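Before uploading, you can sanity-check such a directory locally. The following is a minimal Python sketch, not part of ModelArts, that assumes a local copy of the directory is named dataset-import-example; it treats each subdirectory name as a label and skips files that violate the format and single-image size limits listed above.

import os

# Hypothetical local copy of the OBS directory shown above.
DATASET_DIR = "dataset-import-example"
ALLOWED_EXT = {".jpg", ".jpeg", ".png", ".bmp"}
MAX_IMAGE_BYTES = 5 * 1024 * 1024  # single-image limit: 5 MB

def summarize_labels(root):
    """Return {label_name: image_count}, where the label is the directory name."""
    summary = {}
    for label in sorted(os.listdir(root)):
        label_dir = os.path.join(root, label)
        if not os.path.isdir(label_dir):
            continue
        count = 0
        for name in os.listdir(label_dir):
            path = os.path.join(label_dir, name)
            ext = os.path.splitext(name)[1].lower()
            if ext not in ALLOWED_EXT:
                print(f"skipped (unsupported format): {path}")
                continue
            if os.path.getsize(path) > MAX_IMAGE_BYTES:
                print(f"skipped (larger than 5 MB): {path}")
                continue
            count += 1
        summary[label] = count
    return summary

print(summarize_labels(DATASET_DIR))  # e.g. {'Cat': 3, 'Dog': 3}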

Object Detection

The simple mode of object detection requires that users store labeled objects and their labeling files (in one-to-one relationship with the labeled objects) in the same directory. For example, if the name of the labeled object file is IMG_20180919_114745.jpg, the name of the labeling file must be IMG_20180919_114745.xml.

The labeling files for object detection must be in PASCAL VOC format. For details about the format, see Table 2-7.

Example:

├─dataset-import-example
│      IMG_20180919_114732.jpg
│      IMG_20180919_114732.xml
│      IMG_20180919_114745.jpg
│      IMG_20180919_114745.xml
│      IMG_20180919_114945.jpg
│      IMG_20180919_114945.xml

● If data is imported from an OBS path, you must have the permission to read the OBS path.
● Only images in JPG, JPEG, PNG, and BMP formats are supported. The size of a single image cannot exceed 5 MB, and the total size of all images uploaded at a time cannot exceed 8 MB.
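Because each image must have a labeling file with the same base name, a quick local check can catch missing pairs before import. The sketch below is illustrative only and assumes a local copy of the directory named dataset-import-example, containing the hypothetical file names from the example above.

import os

# Hypothetical local copy of the OBS directory shown above.
DATASET_DIR = "dataset-import-example"
IMAGE_EXT = {".jpg", ".jpeg", ".png", ".bmp"}

def check_pairs(root):
    """Verify that every image has a PASCAL VOC .xml file with the same base name."""
    names = os.listdir(root)
    images = {os.path.splitext(n)[0] for n in names
              if os.path.splitext(n)[1].lower() in IMAGE_EXT}
    labels = {os.path.splitext(n)[0] for n in names
              if os.path.splitext(n)[1].lower() == ".xml"}
    for base in sorted(images - labels):
        print(f"missing labeling file: {base}.xml")
    for base in sorted(labels - images):
        print(f"labeling file without image: {base}.xml")
    return images & labels

paired = check_pairs(DATASET_DIR)
print(f"{len(paired)} correctly paired samples")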

Text Classification

The labeled objects and labeling files for text classification are text files, and correspond to each other based on the rows. For example, the first row in a labeling file indicates the labeling of the first row in the file of the labeled object.

For example, the content of labeled object COMMENTS_20180919_114745.txt is as follows:

It touches good and responds quickly. I don't know how it performs in the future.
Three months ago, I bought a very good phone and replaced my old one with it. It can operate longer between charges.
Why does my phone heat up if I charge it for a while?
The volume button stuck after being pressed down.
It's a gift for Father's Day. The logistics is fast and I received it in 24 hours. I like the earphones because the bass sounds feel good and they would not fall off.

The content of labeling file COMMENTS_20180919_114745_result.txt is as follows:

positive
positive
negative
negative
positive
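Because the labeling file corresponds to the labeled object row by row, the pairing can be reproduced with a few lines of Python. This is a minimal sketch, not part of ModelArts, and assumes local copies of the two files shown above.

# Pair each comment with its label by row position.
DATA_FILE = "COMMENTS_20180919_114745.txt"
LABEL_FILE = "COMMENTS_20180919_114745_result.txt"

with open(DATA_FILE, encoding="utf-8") as f:
    comments = [line.strip() for line in f if line.strip()]
with open(LABEL_FILE, encoding="utf-8") as f:
    labels = [line.strip() for line in f if line.strip()]

# Each row of the data file needs exactly one label row.
assert len(comments) == len(labels), "row counts of the two files must match"

for comment, label in zip(comments, labels):
    print(f"{label}: {comment}")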

The simple mode requires that users store labeled objects and their labeling files (in one-to-one relationship with the labeled objects) in the same directory. For example, if the name of the labeled object file is COMMENTS_20180919_114745.txt, the name of the labeling file must be COMMENTS_20180919_114745_result.txt.

Example of data file storage:

├─dataset-import-example
│      COMMENTS_20180919_114732.txt
│      COMMENTS_20180919_114732_result.txt
│      COMMENTS_20180919_114745.txt
│      COMMENTS_20180919_114745_result.txt
│      COMMENTS_20180919_114945.txt
│      COMMENTS_20180919_114945_result.txt

Sound Classification

For sound classification, sound files with the same label must be stored in the same directory, and the label name is the directory name.

Example:

dataset-import-example
├─Cat
│      10.wav
│      11.wav
│      12.wav
│
└─Dog
       1.wav
       2.wav
       3.wav

● If data is imported from an OBS path, you must have the permission to read the OBS path.


2.3.2.2 Specifications for Importing the Manifest File

The manifest file defines the mapping between labeling objects and content. The Manifest file import mode means that the manifest file is used for dataset import. You can import the manifest file from the local file system or from OBS. When importing a manifest file from OBS, ensure that the current user has the permission to access the directory housing the manifest file.

The manifest file that contains information about the original file and labeling can be used in labeling, training, and inference scenarios. The manifest file that contains only information about the original file can be used in inference scenarios or used to generate an unlabeled dataset. The manifest file must meet the following requirements:

● The manifest file uses the UTF-8 encoding format. The source value of text classification can contain Chinese characters. However, Chinese characters are not recommended for other parameters.

● The manifest file uses the JSON Lines format (jsonlines.org). A line contains one JSON object.
  {"source": "/path/to/image1.jpg", "annotation": ... }
  {"source": "/path/to/image2.jpg", "annotation": ... }
  {"source": "/path/to/image3.jpg", "annotation": ... }
  In the preceding example, the manifest file contains multiple lines of JSON objects.

● The manifest file can be generated by users, third-party tools, or ModelArts Data Labeling. The file name can be any valid file name. To facilitate the internal use of the ModelArts system, the file name generated by the ModelArts Data Labeling function consists of the following character strings: DatasetName-VersionName.manifest, for example, animal-v201901231130304123.manifest. A minimal generation sketch is shown after this list.
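The sketch below is one rough way to generate such a manifest with Python's standard json module; the bucket, object names, and output file name are placeholders rather than values required by ModelArts. Each sample is written as a single UTF-8 JSON line, matching the JSON Lines requirement above.

import json

# Hypothetical unlabeled samples; "source" uses the URI forms shown in Table 2-4.
samples = [
    {"source": "s3://bucket-name/path/to/image1.jpg"},
    {"source": "s3://bucket-name/path/to/image2.jpg"},
    {"source": "content://I love machine learning"},
]

# Write one JSON object per line, UTF-8 encoded, without altering non-ASCII text.
with open("my-dataset.manifest", "w", encoding="utf-8") as f:
    for sample in samples:
        f.write(json.dumps(sample, ensure_ascii=False) + "\n")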

Image Classification

{
  "source":"s3://path/to/image1.jpg",
  "usage":"TRAIN",
  "hard":"true",
  "hard-coefficient":0.8,
  "id":"0162005993f8065ef47eefb59d1e4970",
  "annotation": [
    {
      "type": "modelarts/image_classification",
      "name": "cat",
      "property": {
        "color":"white",
        "kind":"Persian cat"
      },
      "hard":"true",
      "hard-coefficient":0.8,
      "annotated-by":"human",
      "creation-time":"2019-01-23 11:30:30"
    },
    {
      "type": "modelarts/image_classification",
      "name":"animal",
      "annotated-by":"modelarts/active-learning",
      "confidence": 0.8,
      "creation-time":"2019-01-23 11:30:30"
    }
  ],
  "inference-loc":"/path/to/inference-output"
}

Table 2-3 Parameter description

● source (mandatory): URI of an object to be labeled. For details about data source types and examples, see Table 2-4.
● usage (optional): By default, the parameter value is left blank. Possible values are as follows:
  – TRAIN: indicates that the object is used for training.
  – EVAL: indicates that the object is used for evaluation.
  – TEST: indicates that the object is used for testing.
  – INFERENCE: indicates that the object is used for inference.
  If the parameter value is left blank, the user decides how to use the object.
● annotation (optional): If the parameter value is left blank, the object is not labeled. The value of annotation consists of an object list. For details about the parameters, see Table 2-5.
● inference-loc (optional): This parameter is available when the file is generated by the inference service, indicating the location of the inference result file.

Table 2-4 Data source types

● OBS: "source":"s3://path-to-jpg"
● Content: "source":"content://I love machine learning"


Table 2-5 Description of annotation objects

● type (mandatory): Label type. Possible values are as follows:
  – image_classification: image classification
  – text_classification: text classification
  – text_entity: named entity recognition
  – object_detection: object detection
  – audio_classification: sound classification
  – audio_content: speech labeling
  – audio_segmentation: speech paragraph labeling
● name (mandatory for the classification type, optional for other types): This example uses the image classification type.
● property (optional): Labeling property. In this example, the cat has two properties: color and kind.
● hard (optional): Indicates whether the example is a hard example. True indicates that the labeling example is a hard example, and False indicates that the labeling example is not a hard example.
● annotated-by (optional): The default value is human, indicating manual labeling.
● creation-time (optional): Time when the labeling job was created. It is the time when labeling information was written, not the time when the manifest file was generated.
● confidence (optional): Confidence score of machine labeling. The value ranges from 0 to 1.

Text Classification

{
  "source": "content://I like this product",
  "id":"XGDVGS",
  "annotation": [
    {
      "type": "modelarts/text_classification",
      "name": "positive",
      "annotated-by": "human",
      "creation-time": "2019-01-23 11:30:30"
    }
  ]
}
{
  "source": "content://I do not want to use it",
  "annotation": [
    {
      "type": "modelarts/text_classification",
      "name": "negative",
      "annotated-by": "human",
      "creation-time": "2019-01-23 11:30:30"
    }
  ]
}

The content parameter indicates the text to be labeled (in UTF-8 encoding format, which can be Chinese). The other parameters are the same as those described in Image Classification. For details, see Table 2-3.

Named Entity Recognition

{
  "source":"content://Michael Jordan is the most famous basketball player in the world.",
  "usage":"TRAIN",
  "annotation":[
    {
      "type":"modelarts/text_entity",
      "name":"Person",
      "property":{
        "@modelarts:start_index":0,
        "@modelarts:end_index":14
      },
      "annotated-by":"human",
      "creation-time":"2019-01-23 11:30:30"
    },
    {
      "type":"modelarts/text_entity",
      "name":"Category",
      "property":{
        "@modelarts:start_index":34,
        "@modelarts:end_index":44
      },
      "annotated-by":"human",
      "creation-time":"2019-01-23 11:30:30"
    }
  ]
}

The parameters such as source, usage, and annotation are the same as those described in Image Classification. For details, see Table 2-3.

Table 2-6 describes the property parameters. For example, if you want to extract Michael from "source":"content://Michael Jordan", the value of start_index is 0 and that of end_index is 7.

Table 2-6 Description of property parameters

● @modelarts:start_index (Integer): Start position of the text. The value starts from 0, including the characters specified by start_index.
● @modelarts:end_index (Integer): End position of the text, excluding the characters specified by end_index.

Object Detection

{
  "source":"s3://path/to/image1.jpg",
  "usage":"TRAIN",
  "hard":"true",
  "hard-coefficient":0.8,
  "annotation": [
    {
      "type":"modelarts/object_detection",
      "annotation-loc": "s3://path/to/annotation1.xml",
      "annotation-format":"PASCAL VOC",
      "annotated-by":"human",
      "creation-time":"2019-01-23 11:30:30"
    }
  ]
}

● The parameters such as source, usage, and annotation are the same as those described in Image Classification. For details, see Table 2-3.
● annotation-loc: indicates the path for saving the labeling file. This parameter is mandatory for object detection but optional for other types.
● annotation-format: indicates the format of the labeling file. This parameter is optional. The default value is PASCAL VOC. Currently, only PASCAL VOC is supported.

Table 2-7 PASCAL VOC format description

● folder (mandatory): Directory where the data source is located.
● filename (mandatory): Name of the file to be labeled.
● size (mandatory): Image pixels.
  – width: image width. This parameter is mandatory.
  – height: image height. This parameter is mandatory.
  – depth: number of image channels. This parameter is mandatory.
● segmented (mandatory): Segmented or not.
● object (mandatory): Object detection information. Multiple object{} functions are generated for multiple objects.
  – name: indicates the class of the labeled content. This parameter is mandatory.
  – pose: indicates the shooting angle of the labeled content. This parameter is mandatory.
  – truncated: indicates whether the labeled content is truncated (0 indicates that the content is not truncated). This parameter is mandatory.
  – occluded: indicates whether the labeled content is occluded (0 indicates that the content is not occluded). This parameter is mandatory.
  – difficult: indicates whether the labeled object is difficult to identify (0 indicates that the object is easy to identify). This parameter is mandatory.
  – confidence: indicates the confidence score of the labeled object. The value range is 0 to 1. This parameter is optional.
  – bndbox: indicates the labeling box type. This parameter is mandatory. For details about the possible values, see Table 2-8.

Table 2-8 Description of labeling box types

● point (Point): Coordinates of a point.
  <x>100</x>
  <y>100</y>
● line (Line): Coordinates of points.
  <x1>100</x1>
  <y1>100</y1>
  <x2>200</x2>
  <y2>200</y2>
● bndbox (Rectangle): Coordinates of the upper left and lower right points.
  <xmin>100</xmin>
  <ymin>100</ymin>
  <xmax>200</xmax>
  <ymax>200</ymax>
● polygon (Polygon): Coordinates of points.
  <x1>100</x1>
  <y1>100</y1>
  <x2>200</x2>
  <y2>100</y2>
  <x3>250</x3>
  <y3>150</y3>
  <x4>200</x4>
  <y4>200</y4>
  <x5>100</x5>
  <y5>200</y5>
  <x6>50</x6>
  <y6>150</y6>
● circle (Circle): Center coordinates and radius.
  <cx>100</cx>
  <cy>100</cy>
  <r>50</r>

Example:

<annotation>
   <folder>test_data</folder>
   <filename>260730932.jpg</filename>
   <size>
      <width>767</width>
      <height>959</height>
      <depth>3</depth>
   </size>
   <segmented>0</segmented>
   <object>
      <name>point</name>
      <pose>Unspecified</pose>
      <truncated>0</truncated>
      <occluded>0</occluded>
      <difficult>0</difficult>
      <point>
         <x1>456</x1>
         <y1>596</y1>
      </point>
   </object>
   <object>
      <name>line</name>
      <pose>Unspecified</pose>
      <truncated>0</truncated>
      <occluded>0</occluded>
      <difficult>0</difficult>
      <line>
         <x1>133</x1>
         <y1>651</y1>
         <x2>229</x2>
         <y2>561</y2>
      </line>
   </object>
   <object>
      <name>bag</name>
      <pose>Unspecified</pose>
      <truncated>0</truncated>
      <occluded>0</occluded>
      <difficult>0</difficult>
      <bndbox>
         <xmin>108</xmin>
         <ymin>101</ymin>
         <xmax>251</xmax>
         <ymax>238</ymax>
      </bndbox>
   </object>
   <object>
      <name>boots</name>
      <pose>Unspecified</pose>
      <truncated>0</truncated>
      <occluded>0</occluded>
      <difficult>0</difficult>
      <hard-coefficient>0.8</hard-coefficient>
      <polygon>
         <x1>373</x1>
         <y1>264</y1>
         <x2>500</x2>
         <y2>198</y2>
         <x3>437</x3>
         <y3>76</y3>
         <x4>310</x4>
         <y4>142</y4>
      </polygon>
   </object>
   <object>
      <name>circle</name>
      <pose>Unspecified</pose>
      <truncated>0</truncated>
      <occluded>0</occluded>
      <difficult>0</difficult>
      <circle>
         <cx>405</cx>
         <cy>170</cy>
         <r>100</r>
      </circle>
   </object>
</annotation>

Sound Classification

{
  "source":"s3://path/to/pets.wav",
  "annotation": [
    {
      "type": "modelarts/audio_classification",
      "name":"cat",
      "annotated-by":"human",
      "creation-time":"2019-01-23 11:30:30"
    }
  ]
}

The parameters such as source, usage, and annotation are the same as those described in Image Classification. For details, see Table 2-3.

Speech Labeling

{
  "source":"s3://path/to/audio1.wav",
  "annotation":[
    {
      "type":"modelarts/audio_content",
      "property":{
        "@modelarts:content":"Today is a good day."
      },
      "annotated-by":"human",
      "creation-time":"2019-01-23 11:30:30"
    }
  ]
}

● The parameters such as source, usage, and annotation are the same as those described in Image Classification. For details, see Table 2-3.
● The @modelarts:content parameter in property indicates speech labeling. The data type is String.

Speech Paragraph Labeling

{
  "source":"s3://path/to/audio1.wav",
  "usage":"TRAIN",
  "annotation":[
    {
      "type":"modelarts/audio_segmentation",
      "property":{
        "@modelarts:start_time":"00:01:10.123",
        "@modelarts:end_time":"00:01:15.456",
        "@modelarts:source":"Tom",
        "@modelarts:content":"How are you?"
      },
      "annotated-by":"human",
      "creation-time":"2019-01-23 11:30:30"
    },
    {
      "type":"modelarts/audio_segmentation",
      "property":{
        "@modelarts:start_time":"00:01:22.754",
        "@modelarts:end_time":"00:01:24.145",
        "@modelarts:source":"Jerry",
        "@modelarts:content":"I'm fine, thank you."
      },
      "annotated-by":"human",
      "creation-time":"2019-01-23 11:30:30"
    }
  ]
}

● The parameters such as source, usage, and annotation are the same as those described in Image Classification. For details, see Table 2-3.

● Table 2-9 describes the property parameters.

Table 2-9 Description of property parameters

● @modelarts:start_time (String): Start time of the sound. The format is hh:mm:ss.SSS, where hh indicates the hour, mm indicates the minute, ss indicates the second, and SSS indicates the millisecond.
● @modelarts:end_time (String): End time of the sound. The format is hh:mm:ss.SSS, where hh indicates the hour, mm indicates the minute, ss indicates the second, and SSS indicates the millisecond.
● @modelarts:source (String): Sound source.
● @modelarts:content (String): Sound content.
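The hh:mm:ss.SSS timestamps are plain strings, so any consumer of the manifest has to parse them itself. The following is a small illustrative Python helper, not part of ModelArts, that converts such a timestamp to seconds, for example to compute a segment's duration.

# Convert an hh:mm:ss.SSS timestamp (as used above) to seconds.
def to_seconds(timestamp):
    hh, mm, rest = timestamp.split(":")
    ss, sss = rest.split(".")
    return int(hh) * 3600 + int(mm) * 60 + int(ss) + int(sss) / 1000.0

start = to_seconds("00:01:10.123")
end = to_seconds("00:01:15.456")
print(f"segment duration: {end - start:.3f} s")  # 5.333 s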

2.4 Labeling Data

2.4.1 Image Classification

Model training uses a large number of labeled images. Therefore, before the model training, add labels to the images that are not labeled. You can add labels to images by manual labeling or auto labeling. Additionally, you can modify the labels of images, or remove their labels and label the images again.

Before labeling an image in image classification scenarios, you need to understand the following:

● Image labeling supports multiple labels. That is, you can add multiple labels to an image.
● A label name can contain a maximum of 32 characters, including Chinese characters, uppercase letters, lowercase letters, digits, hyphens (-), and underscores (_).

Start Labeling

1. Log in to the ModelArts management console. In the left navigation pane, choose Data Management (Beta) > Datasets. The Datasets page is displayed.
2. In the dataset list, select the dataset to be labeled based on the labeling type, and click the dataset name to go to the Dashboard tab page of the dataset. By default, the Dashboard tab page of the current dataset version is displayed. If you need to label the dataset of another version, click the Manage Version tab and then click Set to Current Version in the right pane. For details, see Managing Dataset Versions.
3. On the Dashboard page of the dataset, click Label in the upper right corner. The dataset details page is displayed. By default, all data of the dataset is displayed on the dataset details page.


Synchronizing the Data Source

ModelArts automatically synchronizes data and labeling information from Input Dataset Path to the dataset details page.

To quickly obtain the latest data in the OBS bucket, click Synchronize Data Source on the All or Unlabeled tab page of the dataset details page to add the data uploaded using OBS to the dataset.

Filtering Data

On the Dashboard page of the dataset, click Label in the upper right corner. The dataset details page is displayed, showing all data in the dataset by default. On the All, Unlabeled, or Labeled tab page, you can add filter criteria in the filter criteria area to quickly filter out data you want to view.

The following filter criteria are supported. You can set one or more filter criteria.

● Example Type: Select Hard example or Non-hard example.
● Label: Select All or one or more labels you specified.
● Sample Creation Time: Select Within 1 month, Within 1 day, or Custom to customize a time range.
● File Name or Path: Filter files by file name or file storage path.
● Labeled By: Select the name of the user who performs the labeling operation.
● Sample Attribute: Select the attribute generated by auto grouping.

Figure 2-5 Filter criteria

Labeling Images (Manually)

The dataset details page displays images on the All, Labeled, and Unlabeled tabs. Images on the All tab page are displayed by default. Click an image to preview it. For the images that have been labeled, the label information is displayed at the bottom of the preview page.

1. On the Unlabeled tab page, select the images to be labeled.
   – Manual selection: In the image list, click the selection box in the upper left corner of an image to enter the selection mode, indicating that the image is selected. You can select multiple images of the same type and add labels to them together.


   – Batch selection: If all the images on the current page of the image list belong to the same type, you can click Select Current Page in the upper right corner to select all the images on the current page.

2. Add labels.

   a. In the label adding area on the right, set the label in the Label text box.
      Method 1 (the required label already exists): Click the Label text box and select an existing label from the drop-down list.
      Method 2 (adding a label): In the Label text box, enter a new label name and click Add.

   b. Confirm the Labels of Selected Image information and click OK. The selected image is automatically moved to the Labeled tab page. On the Unlabeled and All tab pages, the labeling information is updated along with the labeling process, including the added label names and the number of images corresponding to each label.

Figure 2-6 Adding labels

Confirming Hard Examples

On the dataset details page, select one or more images to be labeled as hard examples, and click Confirm Hard Example. Then, the selected images carry the hard example attribute. You can filter data that belongs to hard examples.

Viewing Labeled Images

On the dataset details page, click the Labeled tab to view the list of the labeled images. You can click an image to view the label information about the image in the Labels of Selected Image area on the right.

Modifying Labeling Information

After labeling data, you can modify labeled data on the Labeled tab page.


● Modifying based on images
  On the dataset details page, click the Labeled tab, and select one or more images to be modified from the image list. Modify the image information in the label information area on the right.
  – Adding a label: In the Label text box, select an existing label, or enter a new label name and click OK to add the label to the selected image.
  – Modifying a label: In the File Labels area, click the edit icon in the Operation column, enter the correct label name in the text box, and click the confirm icon to complete the modification.

Figure 2-7 Modifying a label

  – Deleting a label: In the Labels of Selected Image area, click the delete icon in the Operation column to delete the label.

● Modifying based on labels
  On the dataset details page, click the Labeled tab. The information about all labels is displayed on the right.

Figure 2-8 Information about all labels

  – Modifying a label: Click the edit icon in the Operation column. In the dialog box that is displayed, enter the new label name and click OK. After the modification, the images that have been added with the label use the new label name.

  – Deleting a label: Click the delete icon in the Operation column. In the displayed dialog box, select Delete label, Delete label and images with only the label (Do not delete source files), or Delete label and images with only the label (Delete source files), and click OK.


Figure 2-9 Deleting a label

Adding Images

In addition to automatically synchronizing data from Input Dataset Path, you can directly add images on ModelArts for data labeling.

1. On the dataset details page, click the All or Unlabeled tab. Then click Add.

2. On the Add page that is displayed, click Add Image.
   Select one or more images to be uploaded in the local environment. Only images in JPG, JPEG, PNG, and BMP formats are supported. The total size of all images uploaded at a time cannot exceed 8 MB (see the sketch after these steps for a local pre-upload check). After the images are selected, their thumbnails and sizes are displayed on the Add page.


Figure 2-10 Adding images

3. On the Add page, click OK.
   The images you have added will be automatically displayed in the image list on the Unlabeled tab page. Additionally, the images are automatically saved to the OBS directory specified by Input Dataset Path.
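As a convenience, the format and size limits above can be checked locally before clicking Add Image. The following Python sketch is only illustrative; the check_images helper and the example file names are hypothetical, and the ModelArts console enforces the same limits during upload.

import os

ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".bmp"}
MAX_TOTAL_BYTES = 8 * 1024 * 1024  # total size per upload batch: 8 MB

def check_images(paths):
    """Raise ValueError if a file format or the total batch size is not allowed."""
    total = 0
    for path in paths:
        ext = os.path.splitext(path)[1].lower()
        if ext not in ALLOWED_EXTENSIONS:
            raise ValueError(f"{path}: only JPG, JPEG, PNG, and BMP are supported")
        total += os.path.getsize(path)
    if total > MAX_TOTAL_BYTES:
        raise ValueError("total size of images uploaded at a time cannot exceed 8 MB")
    return total

# Example (hypothetical local files):
# check_images(["cat1.jpg", "cat2.png"])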

Deleting Images

You can quickly delete the images you want to discard.

On the All, Unlabeled, or Labeled tab page, select the images to be deleted or click Select Current Page to select all images on the page, and click Delete in the upper left corner to delete the images. In the displayed dialog box, select or deselect Delete source files as required. After confirmation, click OK to delete the images.

If a tick is displayed in the upper right corner of an image, the image is selected. If no image is selected on the page, the Delete button is unavailable.


NOTICE

If you select Delete source files, images stored in the corresponding OBS directory will be deleted when you delete the selected images. Deleting source files may affect other dataset versions or datasets using those files. As a result, page display, training, or inference may become abnormal. Deleted data cannot be recovered. Exercise caution when performing this operation.

2.4.2 Object Detection

Model training uses a large number of labeled images. Therefore, before the model training, add labels to the images that are not labeled. You can add labels to images by manual labeling or auto labeling. Additionally, you can modify the labels of images, or remove their labels and label the images again.

Before labeling an image in object detection scenarios, you need to understand the following:

● All target objects in the image must be labeled.

● Target objects are clear, without blocking, and contained within labeling boxes.

● A target object must be entirely contained within a labeling box. The target object cannot exceed the labeling box, and no gaps can be left between the box edges and the target object. Otherwise, the background may affect model training.

Start Labeling

1. Log in to the ModelArts management console. In the left navigation pane, choose Data Management (Beta) > Datasets. The Datasets page is displayed.

2. In the dataset list, select the dataset to be labeled based on the labeling type, and click the dataset name to go to the Dashboard tab page of the dataset.
   By default, the Dashboard tab page of the current dataset version is displayed. If you need to label the dataset of another version, click the Manage Version tab and then click Set to Current Version in the right pane. For details, see Managing Dataset Versions.

3. On the Dashboard page of the dataset, click Label in the upper right corner. The dataset details page is displayed. By default, all data of the dataset is displayed on the dataset details page.

Synchronizing the Data Source

ModelArts automatically synchronizes data and labeling information from Input Dataset Path to the dataset details page.

To quickly obtain the latest data in the OBS bucket, click Synchronize Data Source on the All or Unlabeled tab page of the dataset details page to add the data uploaded using OBS to the dataset.


Filtering Data

On the Dashboard tab page of the dataset, the summary of the dataset is displayed by default. In the upper left corner of the page, click Label. The dataset details page is displayed, showing all data in the dataset by default. On the All, Unlabeled, or Labeled tab page, you can add filter criteria in the filter criteria area to quickly filter out data you want to view.

The following filter criteria are supported. You can set one or more filter criteria.

● Example Type: Select Hard example or Non-hard example.
● Label: Select All or one or more labels you specified.
● Sample Creation Time: Select Within 1 month, Within 1 day, or Custom to customize a time range.
● File Name or Path: Filter files by file name or file storage path.
● Labeled By: Select the name of the user who performs the labeling operation.
● Sample Attribute: Select the attribute generated by auto grouping.

Figure 2-11 Setting filtering criteria

Labeling Images (Manually)

The dataset details page provides the Labeled and Unlabeled tabs. The Unlabeled tab page is displayed by default.

1. On the Unlabeled tab page, click an image. The image labeling page is displayed. For details about how to use common buttons on the Labeled tab page, see Table 2-11.

2. In the left toolbar, select a proper labeling shape. The default labeling shape is a rectangle. In this example, the rectangle is used for labeling.
   On the left of the page, multiple tools are provided for you to label images. However, you can use only one tool at a time.


Table 2-10 Supported labeling boxes

Button          Description
Rectangle       Move the mouse and click the upper left corner of the object to be labeled. A rectangle is displayed. Drag the rectangle to cover the object and click to label the object.
Polygon         In the area where the object to be labeled is located, click to label a point, move the mouse to specify multiple points along the edge of the object shape, and then click the first point again. All the points form a polygon, so the object to be labeled is inside the labeling box.
Circle          Click the center point of an object, and move the mouse to draw a circle to cover the object and click to label the object.
Straight line   Click to specify the start and end points of an object, and move the mouse to draw a straight line to cover the object and click to label the object.
Dotted line     Click to specify the start and end points of an object, and move the mouse to draw a dotted line to cover the object and click to label the object.
Point           Click the object in an image to label a point.

3. In the Label text box, enter a new label name and click Add. Alternatively, select an existing label from the drop-down list.

Figure 2-12 Adding an object detection label

4. Click Back to Data Labeling Preview in the upper part of the page to view the labeling information. In the dialog box that is displayed, click OK to save the labeling settings.


5. The selected image is automatically moved to the Labeled tab page. On the Unlabeled and All tab pages, the labeling information is updated along with the labeling process, including the added label names and the number of images corresponding to each label.

Table 2-11 Common icons on the labeling page (buttons are shown as icons on the console)

Button   Description
(icon)   Cancel the previous operation.
(icon)   Redo the previous operation.
(icon)   Zoom in on an image.
(icon)   Zoom out of an image.
(icon)   Delete all labeling boxes on the current image.
(icon)   Display or hide a labeling box. You can perform this operation only on a labeled image.
(icon)   Drag a labeling box to another position or drag the edge of the labeling box to resize it.
(icon)   Reset. After dragging a labeling box, you can click this button to quickly restore the shape and position of the labeling box to the original ones.
(icon)   Display the labeled image in full screen.

Confirming Hard Examples

On the dataset details page, select one or more images to be labeled as hard examples, and click Confirm Hard Example. Then, the selected images carry the hard example attribute. You can filter data that belongs to hard examples.

Viewing Labeled Images

On the dataset details page, click the Labeled tab to view the list of the labeled images. You can click an image to view the label information about the image in the All Labels area on the right.

Modifying Labeling Information

After labeling data, you can modify labeled data on the Labeled tab page.

● Modifying based on images
  On the dataset details page, click the Labeled tab, select the images to be modified, and click the images. The labeling page is displayed. Modify the image information in the label information area on the right.


  – Modifying a label: In the Label Image area, click the edit icon, enter the correct label name in the text box, and click the confirm icon to complete the modification.

  – Deleting a label: In the Label Image area, click the delete icon to delete a label from the image.
    After the label is deleted, click Save and Back in the upper right corner of the page to exit the labeling page. The image will be returned to the Unlabeled tab page.

Figure 2-13 Editing an object detection label

● Modifying based on labels
  On the dataset details page, click the Labeled tab. The information about all labels is displayed on the right.

Figure 2-14 All labels of object detection

  – Modifying a label: Click the edit icon in the Operation column. In the dialog box that is displayed, enter the new label name and click OK. After the modification, the images that have been added with the label use the new label name.

  – Deleting a label: Click the delete icon in the Operation column to delete a label.

Adding Images

In addition to automatically synchronizing data from Input Dataset Path, you can directly add images on ModelArts for data labeling.

1. On the dataset details page, click the All or Unlabeled tab. Then click Add.

2. On the Add page that is displayed, click Add Image.
   Select one or more images to be uploaded in the local environment. Only images in JPG, JPEG, PNG, and BMP formats are supported. The total size of all images uploaded at a time cannot exceed 8 MB. After the images are selected, their thumbnails and sizes are displayed on the Add page.


Figure 2-15 Adding images

3. On the Add page, click OK.
   The images you have added will be automatically displayed in the image list on the Unlabeled tab page. Additionally, the images are automatically saved to the OBS directory specified by Input Dataset Path.

Deleting Images

You can quickly delete the images you want to discard.

On the All, Unlabeled, or Labeled tab page, select the images to be deleted or click Select Current Page to select all images on the page, and click Delete in the upper left corner to delete the images. In the displayed dialog box, select or deselect Delete source files as required. After confirmation, click OK to delete the images.

If a tick is displayed in the upper right corner of an image, the image is selected. If no image is selected on the page, the Delete button is unavailable.


NOTICE

If you select Delete source files, images stored in the corresponding OBS directory will be deleted when you delete the selected images. Deleting source files may affect other dataset versions or datasets using those files. As a result, page display, training, or inference may become abnormal. Deleted data cannot be recovered. Exercise caution when performing this operation.

2.4.3 Text Classification

Model training requires a large amount of labeled data. Therefore, before the model training, add labels to the files that are not labeled. Additionally, you can modify, delete, and re-label the labeled text.

Text classification classifies text content based on labels. Before labeling text content, you need to understand the following:

● Text labeling supports multiple labels. That is, you can add multiple labels to a labeling object.

● A label name can contain a maximum of 32 characters, including Chinese characters, uppercase letters, lowercase letters, digits, hyphens (-), and underscores (_).

Start Labeling

1. Log in to the ModelArts management console. In the left navigation pane, choose Data Management (Beta) > Datasets. The Datasets page is displayed.

2. In the dataset list, select the dataset to be labeled based on the labeling type, and click the dataset name to go to the Dashboard tab page of the dataset.
   By default, the Dashboard tab page of the current dataset version is displayed. If you need to label the dataset of another version, click the Manage Version tab and then click Set to Current Version in the right pane. For details, see Managing Dataset Versions.

3. On the Dashboard page of the dataset, click Label in the upper right corner. The dataset details page is displayed. By default, all data of the dataset is displayed on the dataset details page.

Labeling Content

The dataset details page displays the labeled and unlabeled text files in the dataset. The Unlabeled tab page is displayed by default.

1. On the Unlabeled tab page, the labeling objects are listed in the left pane. In the list, click the text object to be labeled, and select a label in the Label Set area in the right pane. Multiple labels can be added to a labeling object.
   Repeat this operation to select the remaining objects and add labels to them one by one.


Figure 2-16 Labeling for text classification

2. After all objects are labeled, click Save Current Page at the bottom of the page to complete labeling text files on the Unlabeled tab page.

Adding Labels

● Adding labels on the Unlabeled tab page: Click the plus icon next to Label Set. On the Add Label page that is displayed, add a label name, select a label color, and click OK.

Figure 2-17 Adding labels (1)

● Adding labels on the Labeled tab page: Click the plus icon next to All Labels. On the Add Label page that is displayed, add a label name, select a label color, and click OK.

Figure 2-18 Adding labels (2)


Figure 2-19 Adding labels

Viewing the Labeled Text

On the dataset details page, click the Labeled tab to view the list of the labeled text. You can also view all labels supported by the dataset in the All Labels area on the right.

Modifying Labeled Data

After labeling data, you can modify labeled data on the Labeled tab page.

● Modifying based on texts
  On the dataset details page, click the Labeled tab, and select the text to be modified from the text list.
  In the text list, click the text. When the text background turns blue, the text is selected. If a text file has multiple labels, you can click the x icon above a label to delete the label.

● Modifying based on labels
  On the dataset details page, click the Labeled tab. The information about all labels is displayed on the right.
  – Batch modification: In the All Labels area, click the edit icon in the Operation column, modify the label name in the text box, select a label color, and click OK.
  – Batch deletion: In the All Labels area, click the delete icon in the Operation column to delete the label. In the dialog box that is displayed, select Delete label or Delete label and objects with only the label, and click OK.

Adding a File

In addition to automatically synchronizing data from Input Dataset Path, you can directly add text files on ModelArts for data labeling.


1. On the dataset details page, click the Unlabeled tab. Then click Add File.

2. In the Add File dialog box that is displayed, select the files to be uploaded.
   Select one or more files to be uploaded in the local environment. Only .txt and .csv files are supported. The total size of files uploaded at a time cannot exceed 8 MB.

Figure 2-20 Adding a file

3. In the Add File dialog box, click Upload. The files you add will be automatically displayed on the Unlabeled tab page.

Deleting a File

You can quickly delete the files you want to discard.

● On the Unlabeled tab page, select the text to be deleted, and click Delete in the upper left corner to delete the text.

● On the Labeled tab page, select the text to be deleted and click Delete. Alternatively, you can tick Select Current Page to select all text on the current page and click Delete in the upper left corner.

The background of the selected text is blue. If no text is selected on the page, the Delete button is unavailable.

2.4.4 Named Entity Recognition

Named entity recognition assigns labels to named entities in text, such as time and locations. Before labeling, you need to understand the following:

● A label name can contain a maximum of 32 characters, including Chinese characters, uppercase letters, lowercase letters, digits, hyphens (-), and underscores (_).

Start Labeling

1. Log in to the ModelArts management console. In the left navigation pane, choose Data Management (Beta) > Datasets. The Datasets page is displayed.

2. In the dataset list, select the dataset to be labeled based on the labeling type, and click the dataset name to go to the Dashboard tab page of the dataset.


   By default, the Dashboard tab page of the current dataset version is displayed. If you need to label the dataset of another version, click the Manage Version tab and then click Set to Current Version in the right pane. For details, see Managing Dataset Versions.

3. On the Dashboard page of the dataset, click Label in the upper right corner. The dataset details page is displayed. By default, all data of the dataset is displayed on the dataset details page.

Labeling Content

The dataset details page displays the labeled and unlabeled text files in the dataset. The Unlabeled tab page is displayed by default.

1. On the Unlabeled tab page, the labeling objects are listed in the left pane. In the list, click the text object to be labeled, select a part of the text displayed under Label Set for labeling, and select a label in the Label Set area in the right pane. Multiple labels can be added to a labeling object.
   Repeat this operation to select the remaining objects and add labels to them one by one.

Figure 2-21 Labeling for named entity recognition

2. Click Save Current Page in the lower part of the page to complete the labeling.

Adding Labels

● Adding labels on the Unlabeled tab page: Click the plus icon next to Label Set. On the Add Label page that is displayed, add a label name, select a label color, and click OK.

Figure 2-22 Adding a named entity label (1)

● Adding labels on the Labeled tab page: Click the plus icon next to All Labels. On the Add Label page that is displayed, add a label name, select a label color, and click OK.


Figure 2-23 Adding a named entity label (2)

Figure 2-24 Adding a named entity label

Viewing the Labeled Text

On the dataset details page, click the Labeled tab to view the list of the labeled text. You can also view all labels supported by the dataset in the All Labels area on the right.

Modifying Labeled Data

After labeling data, you can modify labeled data on the Labeled tab page.

On the dataset details page, click the Labeled tab, and modify the text information in the label information area on the right.

● Modifying based on texts

  On the dataset details page, click the Labeled tab, and select the text to be modified from the text list.

  Manual deletion: In the text list, click the text. When the text background turns blue, the text is selected. On the right of the page, click the x icon above a text label to delete the label.

● Modifying based on labels

  On the dataset details page, click the Labeled tab. The information about all labels is displayed on the right.


  – Batch modification: In the All Labels area, click the edit icon in the Operation column, modify the label name in the text box, select a label color, and click OK.

  – Batch deletion: In the All Labels area, click the delete icon in the Operation column to delete the label. In the dialog box that is displayed, select Delete label or Delete label and objects with only the label, and click OK.

Adding a File

In addition to automatically synchronizing data from Input Dataset Path, you can directly add text files on ModelArts for data labeling.

1. On the dataset details page, click the Unlabeled tab. Then click Add File.

2. In the Add File dialog box that is displayed, select the files to be uploaded.
   Select one or more files to be uploaded in the local environment. Only .txt and .csv files are supported. The total size of files uploaded at a time cannot exceed 8 MB.

Figure 2-25 Adding a file

3. In the Add File dialog box, click Upload. The files you add will be automatically displayed on the Unlabeled tab page.

Deleting a File

You can quickly delete the files you want to discard.

● On the Unlabeled tab page, select the text to be deleted, and click Delete in the upper left corner to delete the text.

● On the Labeled tab page, select the text to be deleted and click Delete. Alternatively, you can tick Select Current Page to select all text on the current page and click Delete in the upper left corner.

The background of the selected text is blue. If no text is selected on the page, the Delete button is unavailable.

2.4.5 Sound Classification

Model training requires a large amount of labeled data. Therefore, before the model training, label the unlabeled audio files. ModelArts enables you to label audio files in batches with one click. Additionally, you can modify the labels of audio files, or remove their labels and label the audio files again.

Start Labeling

1. Log in to the ModelArts management console. In the left navigation pane, choose Data Management (Beta) > Datasets. The Datasets page is displayed.

2. In the dataset list, select the dataset to be labeled based on the labeling type, and click the dataset name to go to the Dashboard tab page of the dataset. By default, the Dashboard tab page of the current dataset version is displayed. If you need to label the dataset of another version, click the Manage Version tab and then click Set to Current Version in the right pane. For details, see Managing Dataset Versions.

3. On the Dashboard page of the dataset, click Label in the upper right corner. The dataset details page is displayed. By default, all data of the dataset is displayed on the dataset details page.

Synchronizing the Data Source

ModelArts automatically synchronizes data and labeling information from Input Dataset Path to the dataset details page.

To quickly obtain the latest data in the OBS bucket, click Synchronize Data Source on the All or Unlabeled tab page of the dataset details page to add the data uploaded using OBS to the dataset.

Labeling Audio Files

The dataset details page displays the labeled and unlabeled audio files. The Unlabeled tab page is displayed by default. Click the play icon on the left of an audio file to preview the audio.

1. On the Unlabeled tab page, select the audio files to be labeled.
   – Manual selection: In the audio list, click the target audio. If the blue check box is displayed in the upper right corner, the audio is selected. You can select multiple audio files of the same type and label them together.
   – Batch selection: If all audio files of the current page belong to one type, you can click Select Current Page in the upper right corner of the list to select all the audio files on the page.

2. Add labels.

   a. In the right pane, set a label name in the Label text box.
      Method 1 (the required label already exists): In the right pane, select a shortcut from the Shortcut drop-down list, select an existing label name from the Label text box, and click OK.
      Method 2 (adding a label): In the right pane, select a shortcut from the Shortcut drop-down list, and enter a new label name in the Label text box.

   b. The selected audio files are automatically moved to the Labeled tab page. On the Unlabeled tab page, the labeling information is updated along with the labeling process, including the added label names and the number of audio files corresponding to each label.

Shortcut key description: After specifying a shortcut key for a label, you can select an audio file and press the shortcut key to add the label to the audio file. Example: Specify 1 as the shortcut key for the aa label. Select one or more files and press 1 during data labeling. A message is displayed, asking you whether to label the files with aa. Click OK.

A shortcut key corresponds to one label, and a label corresponds to one shortcut key. The same shortcut key cannot be specified for different labels. Shortcut keys can greatly improve labeling efficiency.

Figure 2-26 Adding audio labels

Viewing the Labeled Audio Files

On the dataset details page, click the Labeled tab to view the list of the labeled audio files. Click an audio file. You can view the label information about the audio file in the File Labels area on the right.

Modifying Labels

After labeling data, you can modify labeled data on the Labeled tab page.

● Modifying based on audio
  On the data labeling page, click the Labeled tab. Select one or more audio files to be modified from the audio list. Modify the label in the label details area on the right.
  – Modifying a label: In the File Labels area, click the edit icon in the Operation column, enter the correct label name in the text box, and click the confirm icon to complete the modification.

Figure 2-27 Editing audio labels

  – Deleting a label: In the File Labels area, click the delete icon in the Operation column to delete the label.


● Modifying based on labels
  On the dataset details page, click the Labeled tab. The information about all labels is displayed on the right.

Figure 2-28 Information about all labels

  – Modifying a label: Click the edit icon in the Operation column. In the dialog box that is displayed, enter the new label name and click OK. After the modification, the new label applies to the audio files that contain the original label.

  – Deleting a label: Click the delete icon in the Operation column. In the displayed dialog box, select Delete label, Delete label and audio files with only the label (Do not delete source files), or Delete label and audio files with only the label (Delete source files), and click OK.

Adding Audio Files

In addition to automatically synchronizing data from Input Dataset Path, you can directly add audio files on ModelArts for data labeling.

1. On the dataset details page, click the Unlabeled tab. Then click Add Audio in the upper left corner.

2. In the Add Audio dialog box that is displayed, click Add Audio.
   Select the audio files to be uploaded in the local environment. Only WAV audio files are supported. The size of an audio file cannot exceed 4 MB. The total size of audio files uploaded at a time cannot exceed 8 MB.

3. In the Add Audio dialog box, click OK.
   The audio files you add will be automatically displayed on the Unlabeled tab page. Additionally, the audio files are automatically saved to the OBS directory specified by Input Dataset Path.

Deleting Audio Files

You can quickly delete the audio files you want to discard.

On the Unlabeled or Labeled tab page, select the audio files to be deleted one by one or click Select Current Page to select all audio files on the page, and then click Delete File in the upper left corner. In the displayed dialog box, select or deselect Delete source files as required. After confirmation, click OK to delete the audio files.

If a tick is displayed in the upper right corner of an audio file, the audio file is selected. If no audio file is selected on the page, the Delete File button is unavailable.


NOTICE

If you select Delete source files, audio files stored in the corresponding OBS directory will be deleted when you delete the selected audio files. Deleting source files may affect other dataset versions or datasets using those files. As a result, page display, training, or inference may become abnormal. Deleted data cannot be recovered. Exercise caution when performing this operation.

2.4.6 Speech Labeling

Model training requires a large amount of labeled data. Therefore, before the model training, label the unlabeled audio files. ModelArts enables you to label audio files in batches with one click. Additionally, you can modify the labels of audio files, or remove their labels and label the audio files again.

Start Labeling

1. Log in to the ModelArts management console. In the left navigation pane, choose Data Management (Beta) > Datasets. The Datasets page is displayed.

2. In the dataset list, select the dataset to be labeled based on the labeling type, and click the dataset name to go to the Dashboard tab page of the dataset. By default, the Dashboard tab page of the current dataset version is displayed. If you need to label the dataset of another version, click the Manage Version tab and then click Set to Current Version in the right pane. For details, see Managing Dataset Versions.

3. On the Dashboard page of the dataset, click Label in the upper right corner. The dataset details page is displayed. By default, all data of the dataset is displayed on the dataset details page.

Synchronizing the Data Source

ModelArts automatically synchronizes data and labeling information from Input Dataset Path to the dataset details page.

To quickly obtain the latest data in the OBS bucket, click Synchronize Data Source on the All or Unlabeled tab page of the dataset details page to add the data uploaded using OBS to the dataset.

Labeling Audio Files

The dataset details page displays the labeled and unlabeled audio files. The Unlabeled tab page is displayed by default.

1. In the audio file list on the Unlabeled tab page, click the target audio file. In the area on the right, the audio file is displayed. Click the play icon below the audio file to play the audio.

2. In Speech Content, enter the speech content.

3. After entering the content, click OK to complete the labeling. The audio file is automatically moved to the Labeled tab page.


Figure 2-29 Labeling an audio file

Viewing the Labeled Audio Files

On the dataset details page, click the Labeled tab to view the list of the labeled audio files. Click the audio file to view the audio content in the Speech Content text box on the right.

Modifying Labeled Data

After labeling data, you can modify labeled data on the Labeled tab page.

On the data labeling page, click the Labeled tab, and select the audio file to be modified from the audio file list. In the label information area on the right, modify the content of the Speech Content text box, and click OK to complete the modification.

Adding Audio Files

In addition to automatically synchronizing data from Input Dataset Path, you can directly add audio files on ModelArts for data labeling.

1. On the dataset details page, click the Unlabeled tab. Then click Add Audio in the upper left corner.

2. In the Add Audio dialog box that is displayed, click Add Audio.

   Select the audio files to be uploaded in the local environment. Only WAV audio files are supported. The size of an audio file cannot exceed 4 MB. The total size of audio files uploaded at a time cannot exceed 8 MB.

3. In the Add Audio dialog box, click OK.

   The audio files you add will be automatically displayed on the Unlabeled tab page. Additionally, the audio files are automatically saved to the OBS directory specified by Input Dataset Path.

Deleting Audio Files

You can quickly delete the audio files you want to discard.

On the Unlabeled or Labeled tab page, select the audio files to be deleted, and then click Delete File in the upper left corner. In the displayed dialog box, select or deselect Delete source files as required. After confirmation, click OK to delete the audio files.

If no audio file is selected on the page, the Delete File button is unavailable.


NOTICE

If you select Delete source files, audio files stored in the corresponding OBS directory will be deleted when you delete the selected audio files. Deleting source files may affect other dataset versions or datasets using those files. As a result, page display, training, or inference may become abnormal. Deleted data cannot be recovered. Exercise caution when performing this operation.

2.4.7 Speech Paragraph Labeling

Model training requires a large amount of labeled data. Therefore, before the model training, label the unlabeled audio files. ModelArts enables you to label audio files. Additionally, you can modify the labels of audio files, or remove their labels and label the audio files again.

Start Labeling

1. Log in to the ModelArts management console. In the left navigation pane, choose Data Management (Beta) > Datasets. The Datasets page is displayed.

2. In the dataset list, select the dataset to be labeled based on the labeling type, and click the dataset name to go to the Dashboard tab page of the dataset.
   By default, the Dashboard tab page of the current dataset version is displayed. If you need to label the dataset of another version, click the Manage Version tab and then click Set to Current Version in the right pane. For details, see Managing Dataset Versions.

3. On the Dashboard page of the dataset, click Label in the upper right corner. The dataset details page is displayed. By default, all data of the dataset is displayed on the dataset details page.

Synchronizing the Data Source

ModelArts automatically synchronizes data and labeling information from Input Dataset Path to the dataset details page.

To quickly obtain the latest data in the OBS bucket, click Synchronize Data Source on the All or Unlabeled tab page of the dataset details page to add the data uploaded using OBS to the dataset.

Labeling Audio Files

The dataset details page displays the labeled and unlabeled audio files. The Unlabeled tab page is displayed by default.

1. In the audio file list on the Unlabeled tab page, click the target audio file. In the area on the right, the audio file is displayed. Click the play icon below the audio file to play the audio.

2. Select an audio segment based on the content being played, and enter the audio file label and content in the Speech Content text box.


Figure 2-30 Labeling an audio file

3. After entering the content, click OK to complete the labeling. The audio file is automatically moved to the Labeled tab page.

Viewing the Labeled Audio Files

On the dataset details page, click the Labeled tab to view the list of the labeled audio files. Click the audio file to view the audio content in the Speech Content text box on the right.

Modifying Labeled Data

After labeling data, you can modify labeled data on the Labeled tab page.

● Modifying a label: On the dataset details page, click the Labeled tab, and select the audio file to be modified from the audio file list. In the Speech Content area, modify Label and Content, and click OK to complete the modification.

● Deleting a label: Click the delete icon in the Operation column of the target number to delete the label of the audio segment. Alternatively, you can click the cross (x) icon above the labeled audio file to delete the label. Then click OK.

Adding Audio Files

In addition to automatically synchronizing data from Input Dataset Path, you can directly add audio files on ModelArts for data labeling.

1. On the dataset details page, click the Unlabeled tab. Then click Add Audio in the upper left corner.

2. In the Add Audio dialog box that is displayed, click Add Audio.
   Select the audio files to be uploaded in the local environment. Only WAV audio files are supported. The size of an audio file cannot exceed 4 MB. The total size of audio files uploaded at a time cannot exceed 8 MB.

3. In the Add Audio dialog box, click OK.
   The audio files you add will be automatically displayed on the Unlabeled tab page. Additionally, the audio files are automatically saved to the OBS directory specified by Input Dataset Path.

Deleting Audio Files

You can quickly delete the audio files you want to discard.

On the Unlabeled or Labeled tab page, select the audio files to be deleted, and then click Delete File in the upper left corner. In the displayed dialog box, select or deselect Delete source files as required. After confirmation, click OK to delete the audio files.

If no audio file is selected on the page, the Delete File button is unavailable.

NOTICE

If you select Delete source files, audio files stored in the corresponding OBS directory will be deleted when you delete the selected audio files. Deleting source files may affect other dataset versions or datasets using those files. As a result, page display, training, or inference may become abnormal. Deleted data cannot be recovered. Exercise caution when performing this operation.

2.5 Publishing a Dataset

ModelArts distinguishes data of the same source according to versions labeled at different points in time, which facilitates the selection of dataset versions during subsequent model building and development. After labeling the data, you can publish the dataset to generate a new dataset version.

About Dataset Versions

● For a newly created dataset (before publishing), there is no dataset version information. The dataset must be published before being used for model development or training.

● The default naming rules of dataset versions are V001 and V002 in ascending order. You can customize the version number during publishing.

● You can set any version as the current version. The details of that version are then displayed on the dataset details page.

● You can obtain the dataset in the manifest file format corresponding to each dataset version based on the value of Storage Path. The dataset can be used when you import data or filter hard examples.

Publishing a Dataset

1. Log in to the ModelArts management console. In the left navigation pane, choose Data Management (Beta) > Datasets. The Datasets page is displayed.

2. In the dataset list, click Publish in the Operation column.
   Alternatively, you can click the dataset name to go to the Dashboard tab page of the dataset, and click Publish in the upper right corner.

3. In the displayed Publish New Version dialog box, set Version Name and Format and click OK.
   Version Name: The naming rules of V001 and V002 in ascending order are used by default. A version name can be customized. Only letters, digits, hyphens (-), and underscores (_) are allowed (a sketch for checking custom names follows this procedure).
   Format: The Default and CarbonData options are supported.


Figure 2-31 Publishing a dataset

   After the version is published, you can go to the Version Manager tab page to view the detailed information. By default, the system sets the latest version as the current version.
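A minimal sketch of the version naming rules above, assuming you generate default names locally before publishing; the next_default_version and is_valid_version_name helpers are illustrative only and not part of ModelArts.

import re

VERSION_NAME_PATTERN = re.compile(r"^[A-Za-z0-9_-]+$")  # letters, digits, hyphens, underscores

def is_valid_version_name(name: str) -> bool:
    """Check a custom version name against the documented character rules."""
    return bool(VERSION_NAME_PATTERN.match(name))

def next_default_version(existing):
    """Produce the next default name in the V001, V002, ... sequence."""
    numbers = [int(v[1:]) for v in existing if re.fullmatch(r"V\d{3}", v)]
    return f"V{(max(numbers) + 1 if numbers else 1):03d}"

print(next_default_version(["V001", "V002"]))    # V003
print(is_valid_version_name("release-2020_02"))  # True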

Directory Structure of Related Files After the Dataset Is Published

Datasets are managed based on OBS directories. After a new version is published, the directory is generated based on the new version in the output dataset path.

Take an image classification dataset as an example. After the dataset is published, the directory structure of related files generated in OBS is as follows:

|-- user-specified-output-path
    |-- DatasetName-datasetId
        |-- annotation
            |-- VersionName1
                |-- VersionName1.manifest
            |-- VersionName2
                ...
            |-- ...

The following uses object detection as an example. If a manifest file is imported to the dataset, the following provides the directory structure of related files after the dataset is published:

|-- user-specified-output-path
    |-- DatasetName-datasetId
        |-- annotation
            |-- VersionName1
                |-- VersionName1.manifest
                |-- annotation
                    |-- file1.xml
            |-- VersionName2
                ...
            |-- ...
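Based on the directory structure above, the OBS path of a published version's manifest file can be assembled from the output path, the dataset name and ID, and the version name. The following Python sketch only builds the path string; it does not call OBS, and the example values are hypothetical.

def manifest_path(output_path: str, dataset_name: str, dataset_id: str, version: str) -> str:
    """Return the expected OBS path of the manifest file for a published version."""
    return f"{output_path.rstrip('/')}/{dataset_name}-{dataset_id}/annotation/{version}/{version}.manifest"

# Hypothetical values for illustration:
print(manifest_path("obs://my-bucket/output", "flowers", "T2AK4r", "V001"))
# obs://my-bucket/output/flowers-T2AK4r/annotation/V001/V001.manifest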

2.6 Managing Dataset Versions

After labeling data, you can publish the dataset to multiple versions for management. For the published versions, you can view the dataset version updates, set the current version, and delete versions. For details about dataset versions, see About Dataset Versions.

For details about how to publish a new version, see Publishing a Dataset.

Viewing Dataset Version Updates

1. Log in to the ModelArts management console. In the left navigation pane, choose Data Management (Beta) > Datasets. The Datasets page is displayed.

2. In the dataset list, choose More > Manage Version in the Operation column. The Manage Version tab page is displayed.
   You can view basic information about the dataset, and view the version and release time on the left.

Figure 2-32 Viewing dataset versions

Setting to Current Version

1. Log in to the ModelArts management console. In the left navigation pane, choose Data Management (Beta) > Datasets. The Datasets page is displayed.

2. In the dataset list, choose More > Manage Version in the Operation column. The Manage Version tab page is displayed.

3. On the Manage Version tab page, select the desired dataset version, and click Set to Current Version in the basic information area on the right side. After the setting is completed, Current version is displayed on the right of the version name.

Only the version in Normal status can be set to the current version.


Figure 2-33 Setting to current version

Deleting a Dataset Version

1. Log in to the ModelArts management console. In the left navigation pane, choose Data Management (Beta) > Datasets. The Datasets page is displayed.

2. In the dataset list, choose More > Manage Version in the Operation column. The Manage Version tab page is displayed.

3. Locate the row that contains the target version, and click Delete in the Operation column. In the dialog box that is displayed, click OK.

Deleting a dataset version does not remove the original data. Data and its labeling information are still stored in the OBS directory. However, after a version is deleted, you can no longer manage it on the ModelArts management console. Exercise caution when performing this operation.

2.7 Modifying a Dataset

For a created dataset, you can modify its basic information to match service changes.

Prerequisites

You have created a dataset.

Modifying the Basic Information About a Dataset

1. Log in to the ModelArts management console. In the left navigation pane, choose Data Management (Beta) > Datasets. The Datasets page is displayed.

2. In the dataset list, choose More > Modify in the Operation column.
   Alternatively, you can click the dataset name to go to the Dashboard tab page of the dataset, and click Modify in the upper right corner.

3. Modify basic information about the dataset by referring to Table 2-12 and then click OK.


● For datasets of the Object detection type, team labeling cannot be disabled after being enabled.

● For object detection, labels with label attributes cannot be modified but can be deleted.

Figure 2-34 Modifying a dataset

Table 2-12 Parameter description

Parameter     Description

Name          Enter the name of the dataset. A dataset name can contain only letters, digits, underscores (_), and hyphens (-).

Description   Enter a brief description for the dataset.

Label Set     Label Set modifications only apply to the datasets of the Image classification, Object detection, Sound classification, Named entity recognition, Text classification, and Text triplet types. Label Color modifications only apply to the datasets of the Named entity recognition, Object detection, Text classification, and Text triplet types.
              ● Change the label name: You can enter a label name in the Label Name text box. The label name can contain only Chinese characters, letters, digits, underscores (_), and hyphens (-). The name contains 1 to 32 characters.
              ● Add Label Attribute: Label attributes can be added to the datasets of the object detection type.
              ● Add Label: Click the add icon to add a label.
              ● Label Color: Click the color box and select a color from the color palette shown in the following figure, or enter the hexadecimal color code to set the color.

2.8 Team Labeling

2.8.1 Team Labeling Overview

Generally, a small data labeling task can be completed by an individual. However, team work is required to label a large dataset. ModelArts provides the team labeling function. A labeling team can be formed to manage labeling for the same dataset.

Currently, the team labeling function is only available for datasets whose labeling types are image classification and object detection.


How to Enable Team Labeling

● When creating a dataset, enable Team Labeling and select a team or task manager.

Figure 2-35 Enabling during dataset creation

● If team labeling is not enabled for a dataset that has been created, create a team labeling task to enable team labeling. For details about how to create a team labeling task, see Creating Team Labeling Tasks.

Figure 2-36 Creating a team labeling task in a dataset list

Figure 2-37 Creating a team labeling task on the dataset details page


Figure 2-38 Creating a team labeling task on the dataset details page

Operations Related to Team Labeling

● Team Management
● Member Management
● Managing Team Labeling Tasks

2.8.2 Team Management

Team labeling is managed by team. To enable team labeling for a dataset, a team must be specified. Multiple members can be added to a team.

Background

● An account can have a maximum of 10 teams.
● An account must have at least one team to enable team labeling for datasets. If the account has no team, add a team by referring to Adding a Team.

Adding a Team

1. In the left navigation pane of the ModelArts management console, choose Data Management (Beta) > Labeling Team. The Labeling Team page is displayed.

2. On the Labeling Team page, click Add Team.

3. In the displayed Add Team dialog box, enter a team name and description and click OK. The labeling team is added.

Figure 2-39 Adding a team


The new team is displayed on the Labeling Team page. You can view team details in the right pane. There is no member in the new team. Add members to the new team by referring to Adding a Member.

Deleting a Team

You can delete a team that is no longer used.

On the Labeling Team page, select the target team and click Delete. In the dialog box that is displayed, click OK.

Figure 2-40 Deleting a team

2.8.3 Member Management

There is no member in a new team. You need to add members who will participate in a team labeling task.

A maximum of 100 members can be added to a team. If there are more than 100 members, you are advised to add them to different teams for better management.

Adding a Member

1. In the left navigation pane of the ModelArts management console, choose Data Management (Beta) > Labeling Team. The Labeling Team page is displayed.

2. On the Labeling Team page, select a team from the team list on the left and click the team name. The team details are displayed in the right pane.

3. In the Team Details area, click Add Member.

4. In the displayed Add Member dialog box, enter an email address, description, and a role for the member and click OK.
An email address uniquely identifies a team member. Different members cannot use the same email address. The email address you enter will be recorded and saved in ModelArts and is used only for ModelArts team labeling. After a member is deleted, the email address will also be deleted.
Possible values of Role are Labeler, Reviewer, and Team Manager. Only one Team Manager can be set.


Figure 2-41 Adding a member

Information about the added member is displayed in the Team Details area.

Modifying Member Information

You can modify member information if it changes.

1. In the Team Details area, select the desired member.

2. In the row containing the desired member, click Modify in the Operation column. In the displayed dialog box, modify the description or role.
The email address of a member cannot be changed. To change the email address of a member, you are advised to delete the member, and set a new email address when adding a member.
Possible values of Role are Labeler, Reviewer, and Team Manager. Only one Team Manager can be set.

Deleting Members

● Deleting a single member
In the Team Details area, select the desired member, and click Delete in the Operation column. In the dialog box that is displayed, click OK.

● Batch deletion
In the Team Details area, select the members to be deleted and click Delete. In the dialog box that is displayed, click OK.


Figure 2-42 Batch deletion

2.8.4 Managing Team Labeling Tasks

For datasets with team labeling enabled, you can create team labeling tasks and assign the labeling tasks to different teams so that team members can complete the labeling tasks together. During data labeling, members can initiate acceptance, continue acceptance, and view acceptance reports.

Creating Team Labeling Tasks

If you enable team labeling when creating a dataset and assign a team to label the dataset, the system creates a labeling task based on the team by default. After the dataset is created, you can view the labeling task on the Labeling Progress tab page of the dataset.

You can also create a team labeling task and assign it to different members in the same team or to other labeling teams.

1. Log in to the ModelArts management console. In the left navigation pane, choose Data Management (Beta) > Datasets. The dataset list is displayed.

2. In the dataset list, select a dataset that supports team labeling, and click the dataset name to go to the Dashboard tab page of the dataset.

3. Click the Labeling Progress tab to view existing labeling tasks of the dataset. Click Create Team Labeling Task in the upper right corner to create a task.

Figure 2-43 Labeling tasks

4. In the displayed Create Team Labeling Task dialog box, set related parameters and click OK.
– Name: Enter a task name.
– Type: Select a task type, Team or Task Manager.
– Select Team: If Type is set to Team, you need to select a team and members for labeling. The Select Team drop-down list box lists the labeling teams and members created by the current account. For details about team management, see Team Labeling Overview.


– Select Task Manager: If Type is set to Task Manager, you need to select one Team Manager member from all teams as the task manager.
– Label Set: All existing labels and label attributes of the dataset are displayed. You can also select Automatically synchronize new images to the team labeling task or Automatically load the intelligent labeling results to images that need to be labeled under Label Set.

Figure 2-44 Creating team labeling tasks

After the task is created, you can view the new task on the Labeling Progress tab page.

Task Acceptance

● Initiating acceptance
After team members complete data labeling, the dataset creator can initiate acceptance to check the labeling results.

a. On the Labeling Progress tab page, click Initiate Acceptance to accept tasks.

b. In the displayed dialog box, set Sample Policy to By percentage or By quantity. Click OK to start the acceptance.
By percentage: Sampling is performed based on a percentage for acceptance.


By quantity: Sampling is performed based on quantity for acceptance.

Figure 2-45 Initiating acceptance

c. After the acceptance is initiated, an acceptance report is displayed on the console in real time. In the Acceptance Result area on the right, select Pass or Reject. If you select Pass, set Rating to A, B, C, or D. Option A indicates the highest score. See Figure 2-47. If you select Reject, enter your rejection reasons in the text box. See Figure 2-48.

Figure 2-46 Viewing a real-time acceptance report

Figure 2-47 Pass


Figure 2-48 Reject

● Continuing acceptance

You can continue accepting tasks whose acceptance is not completed. For tasks for which an acceptance process has not been initiated, the Continue Acceptance button is unavailable.
On the Labeling Progress tab page, click Continue Acceptance to continue accepting tasks. The Real-Time Acceptance Report page is displayed. You can continue to accept the images that have not been accepted.

● Finishing acceptance
After all images are accepted, click Finish in the upper right corner to finish the acceptance. In the displayed Finish dialog box, check the acceptance report. After confirming the report, click Accept. If the acceptance report does not meet the requirements, you can click Rejected to continue data labeling and acceptance.
Once the labeled data is accepted, team members cannot modify the labeling information. Only the dataset creator can modify the labeling information.

Figure 2-49 Finishing acceptance

Viewing an Acceptance Report

You can view the acceptance report of an ongoing or finished labeling task. On the Labeling Progress tab page, click Acceptance Report. In the displayed Acceptance Report dialog box, view the report details.


Figure 2-50 Viewing an acceptance report

Deleting a Labeling Task

On the Labeling Progress tab page, click Delete in the row containing the labeling task to be deleted. After a task is deleted, the labeling details that have not been accepted will be lost. Exercise caution when performing this operation. However, the original data in the dataset and the labeled data that has been accepted are still stored in the corresponding OBS bucket.

2.9 Deleting a Dataset

If a dataset is no longer in use, you can delete it to release resources.

After the dataset is deleted, if you also need to delete the data in the dataset input and output paths in OBS to release resources, delete the data and the OBS folders on the OBS console.

Procedure

1. In the left navigation pane, choose Data Management (Beta) > Datasets. On the Datasets page, choose More > Delete in the Operation column of the dataset.

2. In the displayed dialog box, click OK.


After a dataset is deleted, some functions such as dataset version management become unavailable. Exercise caution when performing this operation. However, the original data and labeling data of the dataset are still stored in OBS.


3 Training Management

3.1 Model Training Overview

ModelArts provides model training so that you can view the training effect and adjust your model parameters accordingly. You can select resource pools (CPU or GPU) with different instance flavors for model training. In addition to models developed by users, ModelArts also provides preset algorithms. You can directly adjust the parameters of a preset algorithm, instead of developing a model by yourself, to obtain a satisfactory model.

Description of the Model Training Function

Table 3-1 Function description

Function: Built-in algorithms
Description: Based on the frequently-used AI engines in the industry, ModelArts provides built-in algorithms to meet a wide range of requirements. You can directly select the algorithms for training jobs, without concerning yourself with model development.
Reference: Introduction to Built-in Algorithms

Function: Training job management
Description: You can create training jobs, manage training job versions, and view details of training jobs, traceback diagrams, and evaluation details.
Reference: Creating a Training Job; Managing Training Job Versions; Viewing Job Details


Function: Job parameter management
Description: You can save the parameter settings of a training job (including the data source, algorithm source, running parameters, resource pool parameters, and more) as a job parameter, which can be directly used when you create a training job, eliminating the need to set parameters one by one. As such, configuration efficiency can be greatly improved.
Reference: Managing Job Parameters

Function: Model training visualization (TensorBoard)
Description: TensorBoard is a tool that can effectively display the computational graph of TensorFlow in the running process, the trend of various metrics over time, and the data used in the training. Currently, TensorBoard supports only training jobs based on the TensorFlow and MXNet engines.
Reference: Managing Visualization Jobs

3.2 Built-in Algorithms

3.2.1 Introduction to Built-in Algorithms

Based on the frequently-used AI engines in the industry, ModelArts provides built-in algorithms to meet a wide range of requirements. You can directly select the algorithms for training jobs, without concerning yourself with model development.

The preset algorithms of ModelArts adopt the MXNet and TensorFlow engines and are mainly used for detection of object classes and locations, image classification, semantic image segmentation, and reinforcement learning.

Viewing Built-in Algorithms

In the left navigation pane of the ModelArts management console, choose Training Management > Training Jobs. On the displayed page, click Built-in Algorithms. In the preset algorithm list, click the icon next to an algorithm name to view details about the algorithm.

You can click Create Training Job in the Operation column for an algorithm to quickly create a training job, for which this algorithm serves as the Algorithm Source.

● Before using a built-in algorithm to create a training job, prepare and upload training data to OBS. For details about the data storage path and data format requirements, see Requirements for Datasets.
● The built-in algorithms hard_example_mining and feature_cluster are for internal use only and you cannot use them for training.


Figure 3-1 Preset algorithm list

For details about the built-in algorithms and their running parameters, see the following:

● yolo_v3
● retinanet_resnet_v1_50
● inception_v3
● darknet_53
● SegNet_VGG_BN_16
● ResNet_v2_50
● ResNet_v1_50
● Faster_RCNN_ResNet_v2_101
● Faster_RCNN_ResNet_v1_50

3.2.2 Requirements for Datasets

The built-in algorithms provided by ModelArts can be used for image classification, object detection, image semantic segmentation, and reinforcement learning. The requirements for the datasets vary according to the built-in algorithms used for different purposes. Before using a built-in algorithm to create a training job, you are advised to prepare a dataset based on the requirements of the algorithm.

Image Classification

The training dataset must be stored in an OBS bucket. The following shows the OBS path structure of the dataset:

|-- data_url
    |-- a.jpg
    |-- a.txt
    |-- b.jpg
    |-- b.txt
    ...

● data_url indicates the folder name. You can customize the folder name. Images and label files cannot be stored in the root directory of an OBS bucket.
● Images and label files must have the same name. The .txt files are the label files for image classification. The images can be in JPG, JPEG, PNG, or BMP format.


● The first row of a label file for image classification indicates the category name of the image, which can be Chinese characters, English letters, or digits. The following provides an example of the file content:
cat
● In addition to the preceding files and folders, no other files or folders can exist in the data_url folder.
● You can also use the images labeled in an ExeML project to retrain a built-in image classification or object detection algorithm to obtain a new model. A minimal example of preparing such label files is shown after this list.
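The following is a minimal sketch, for illustration only, of generating the per-image .txt label files locally before uploading the folder to OBS. The folder name data_url and the image-to-class mapping are assumptions, not part of this guide.

import os

# Hypothetical mapping from image file name to its class name; replace with your own data.
labels = {"a.jpg": "cat", "b.jpg": "dog"}

data_dir = "data_url"  # assumed local folder that will later be uploaded to OBS
os.makedirs(data_dir, exist_ok=True)

for image_name, class_name in labels.items():
    # Each label file shares the image's base name and contains the class name in its first row.
    txt_name = os.path.splitext(image_name)[0] + ".txt"
    with open(os.path.join(data_dir, txt_name), "w", encoding="utf-8") as f:
        f.write(class_name + "\n")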

Object Class and Location

The training dataset must be stored in an OBS bucket. The following shows the OBS path structure of the dataset:

|-- data_url
    |-- a.jpg
    |-- a.xml
    |-- b.jpg
    |-- b.xml
    ...

● data_url indicates the folder name. You can customize the folder name. Images and label files cannot be stored in the root directory of an OBS bucket.
● Images and label files must have the same name. The .xml files are the label files for object detection. The images can be in JPG, JPEG, PNG, or BMP format.
● In addition to the preceding files and folders, no other files or folders can exist in the data_url folder.
● You can also use the images labeled in an ExeML project to retrain a built-in image classification or object detection algorithm to obtain a new model.
● The following provides a label file for object detection. The key parameters are size (image size), object (object information), and name (label name, which can be Chinese characters, English letters, or digits). Note that the values of xmin, ymin, xmax, and ymax in the bndbox field cannot exceed the value of size. That is, the min values cannot be less than 0, and the max values cannot be greater than the value of width or height.
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<annotation>
    <folder>Images</folder>
    <filename>IMG_20180919_120022.jpg</filename>
    <source>
        <database>Unknown</database>
    </source>
    <size>
        <width>800</width>
        <height>600</height>
        <depth>1</depth>
    </size>
    <segmented>0</segmented>
    <object>
        <name>yunbao</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>216.00</xmin>
            <ymin>108.00</ymin>
            <xmax>705.00</xmax>
            <ymax>488.00</ymax>
        </bndbox>


    </object>
</annotation>
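As a quick check of the bndbox constraint described above, the following is a minimal sketch (the file name is a placeholder) that parses one such label file and verifies that every box stays within the image size:

import xml.etree.ElementTree as ET

def check_label_file(path):
    # Verify that every bndbox in a VOC-style label file stays inside the image size.
    root = ET.parse(path).getroot()
    width = float(root.findtext("size/width"))
    height = float(root.findtext("size/height"))
    for obj in root.iter("object"):
        box = obj.find("bndbox")
        xmin, ymin = float(box.findtext("xmin")), float(box.findtext("ymin"))
        xmax, ymax = float(box.findtext("xmax")), float(box.findtext("ymax"))
        assert 0 <= xmin < xmax <= width, f"invalid x range in {path}"
        assert 0 <= ymin < ymax <= height, f"invalid y range in {path}"

check_label_file("a.xml")  # hypothetical label file from the data_url folder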

Image Semantic Segmentation

The training dataset must be stored in an OBS bucket. The following shows the OBS path structure of the dataset:

|-- data_url
    |-- Image
        |-- a.jpg
        |-- b.jpg
        ...
    |-- Label
        |-- a.jpg
        |-- b.jpg
        ...
    |-- train.txt
    |-- val.txt

Description:

● data_url, Image, and Label indicate the OBS folder names. The Image folder stores the images for semantic segmentation, and the Label folder stores the labeled images.
● The name and format of the images for semantic segmentation must be the same as those of the corresponding labeled images. Images in JPG, JPEG, PNG, or BMP format are supported.
● In the preceding structure, train.txt and val.txt are two list files. train.txt is the list file of the training set, and val.txt is the list file of the validation set. It is recommended that the ratio of the training set to the validation set be 8:2.
In a list file, the relative paths of images and labels are separated by spaces, and different pieces of data are separated by newline characters. The following gives an example (a minimal sketch for generating these list files follows):
Image/a.jpg Label/a.jpg
Image/b.jpg Label/b.jpg
...
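The following is a minimal sketch, assuming a local copy of the dataset laid out as above, of generating train.txt and val.txt with the recommended 8:2 split. The folder name data_url is an assumption for illustration.

import os
import random

data_dir = "data_url"  # assumed local copy of the dataset before uploading to OBS
images = sorted(os.listdir(os.path.join(data_dir, "Image")))
random.shuffle(images)

split = int(len(images) * 0.8)  # recommended 8:2 split between training and validation
lines = [f"Image/{name} Label/{name}" for name in images]

with open(os.path.join(data_dir, "train.txt"), "w") as f:
    f.write("\n".join(lines[:split]) + "\n")
with open(os.path.join(data_dir, "val.txt"), "w") as f:
    f.write("\n".join(lines[split:]) + "\n")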

3.2.3 Algorithms and Their Running Parameters

This section describes the built-in algorithms supported by ModelArts and the running parameters supported by each algorithm. You can set running parameters for a training job as required.

yolo_v3

Table 3-2 Algorithm description

Name: yolo_v3
Usage: Object class and location
Engine Type: MXNet, MXNet-1.2.1-python2.7


Precision: 81.7% (mAP). mAP is an indicator that measures the effect of an object detection algorithm. For object detection tasks, the precision rate (Precision) and recall rate (Recall) can be calculated for each class of object at different thresholds, giving a P-R curve for that class. The average precision of a class is the area under its P-R curve, and mAP is the mean of these values over all classes. (A minimal sketch of this computation is provided after this description.)
Training Dataset: PASCAL VOC2007, detection of 20 classes of objects
Data Format: shape: [H>=224, W>=224, C>=1]; type: int8
Running Parameter: lr=0.0001; mom=0.9; wd=0.0005. For more available running parameters, see Table 3-3.
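For reference, the following is a minimal sketch of how average precision relates to the P-R curve; the precision and recall values are hypothetical and not taken from this guide:

import numpy as np

# Hypothetical precision/recall pairs for one object class, measured at several score thresholds.
recall = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
precision = np.array([1.0, 0.95, 0.90, 0.80, 0.60, 0.40])

# Average precision for this class: the area under its P-R curve (trapezoid rule).
ap = np.sum(np.diff(recall) * (precision[:-1] + precision[1:]) / 2)
print(f"AP for this class: {ap:.3f}")  # mAP is the mean of the per-class AP values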

Table 3-3 Running parameters

Optional parameters and their default values:

● lr: Learning rate. Default value: 0.0001
● mom: Momentum of the training network. Default value: 0.9
● wd: Weight decay coefficient (L2). Default value: 0.0005
● num_classes: Total number of image classes in training. You do not need to add 1 here. Default value: None
● split_spec: Split ratio of the training set and validation set. Default value: 0.8
● batch_size: Total number of training images updated each time. Default value: 4
● eval_frequence: Frequency for validating the model. By default, validation is performed every epoch. Default value: 1
● num_epoch: Number of training epochs. Default value: 10
● num_examples: Total number of images used for training. For example, if the total number of images is 1,000, 800 images are used for training based on the split ratio. Default value: 16551


● disp_batches: The loss and training speed of the model are displayed every N batches. Default value: 20
● warm_up_epochs: Number of epochs when the target learning rate of the warm-up strategy is reached. Default value: 0
● lr_steps: Number of epochs at which the learning rate attenuates in the multi-factor strategy. By default, the learning rate attenuates to 0.1 times of the original value at the 10th and 15th epochs. Default value: 10,15

retinanet_resnet_v1_50

Table 3-4 Algorithm description

Name: retinanet_resnet_v1_50
Usage: Object class and location
Engine Type: TensorFlow, TF-1.8.0-python2.7
Precision: 83.15% (mAP). mAP is an indicator that measures the effect of an object detection algorithm: the precision and recall rates are calculated for each class of object at different thresholds, a P-R curve is obtained accordingly, and the area under the curve is the average value.
Training Dataset: ImageNet-1k; [H, W, C=3]
Data Format: shape: [H, W, C>=1]; type: int8
Running Parameter: By default, no running parameters are set for the algorithm. For more available running parameters, see Table 3-5.


Table 3-5 Running parameters

Optional parameters and their default values:

● split_spec: Split ratio of the training set and validation set. Default value: train:0.8,eval:0.2
● num_gpus: Number of used GPUs. Default value: 1
● batch_size: Number of images for each iteration (standalone). To ensure the algorithm precision, you are advised to use the default value. The value is fixed to 1.
● learning_rate_strategy: Learning rate strategy. The value ranges from 0 to 1. For example, the value can be set to 0.001. Default value: 0.002
● evaluate_every_n_epochs: A validation is performed after every N training epochs. Default value: 1
● save_interval_secs: Interval for saving the model, in seconds. If the model running time is greater than 2,000,000s, the model is saved once every 2,000,000s by default. If the model running time is less than 2,000,000s, the model is saved when the running is complete. Default value: 2000000
● max_epoches: Maximum number of training epochs. Default value: 100
● log_every_n_steps: Logs are printed every N steps. By default, logs are printed every 10 steps. Default value: 10
● save_summaries_steps: Summary information, including the model gradient update value and training parameters, is saved every N steps. Default value: 5
● weight_decay: L2 regularization weight decay. Default value: 0.00004
● optimizer: Optimizer. The options are dymomentumw, sgd, adam, and momentum. Default value: momentum
● momentum: Optimizer parameter momentum. Default value: 0.9


● patience: After N training epochs, if the precision (mAP for object detection and accuracy for image classification) does not increase compared with the previous maximum value, that is, the difference between the precision and the maximum precision is less than the value of decay_min_delta, the learning rate attenuates to one tenth of the original value. Default value: 8
● decay_patience: After training of extra M epochs on the basis of the preceding patience, if the precision still does not increase, that is, the difference between the precision and the maximum precision is less than the value of decay_min_delta, training is terminated early. Default value: 1
● decay_min_delta: Minimum difference between the precision values corresponding to different learning rates. If the difference is greater than 0.001, the precision is considered to have increased; otherwise, it is not. Default value: 0.001
● rcnn_iou_threshold: IoU threshold used for calculating mAP when SSD or Faster R-CNN is used. Default value: 0.5

inception_v3

Table 3-6 Algorithm description

Name: inception_v3
Usage: Image Classification
Engine Type: TensorFlow, TF-1.8.0-python2.7


Precision: 78.00% (top1), 93.90% (top5)
● top1 indicates that the classification is considered correct only when the class with the maximum probability is the correct class.
● top5 indicates that the classification is considered correct only when the correct class is within the top 5 predicted classes.
(A minimal sketch of the top-k check is provided after this description.)
Training Dataset: ImageNet, classification of 1,000 image classes
Data Format: shape: [H, W, C>=1]; type: int8
Running Parameter: batch_size=32; split_spec=train:0.8,eval:0.2. For more available running parameters, see Table 3-7.
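For reference, a minimal sketch of the top-k check described above; the probability vector is hypothetical:

import numpy as np

def topk_correct(probs, true_label, k):
    # True if the correct class is among the k classes with the highest predicted probability.
    topk = np.argsort(probs)[::-1][:k]
    return true_label in topk

probs = np.array([0.05, 0.60, 0.10, 0.20, 0.05])  # hypothetical class probabilities
print(topk_correct(probs, true_label=3, k=1))  # top-1 check: False (class 1 has the highest probability)
print(topk_correct(probs, true_label=3, k=5))  # top-5 check: True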

Table 3-7 Running parameters

Optional parameters and their default values:

● split_spec: Split ratio of the training set and validation set. Default value: train:0.8,eval:0.2
● num_gpus: Number of used GPUs. Default value: 1
● batch_size: Number of images for each iteration (standalone). To ensure the algorithm precision, you are advised to use the default value. Default value: 32
● eval_batch_size: Number of images read in each step during validation (standalone). Default value: 32
● learning_rate_strategy: Learning rate strategy. For example, 10:0.001,20:0.0001 indicates that the learning rate for epochs 0 to 10 is 0.001, and that for epochs 10 to 20 is 0.0001. Default value: 0.002
● evaluate_every_n_epochs: A validation is performed after every N training epochs. Default value: 1
● save_interval_secs: Interval for saving the model, in seconds. If the model running time is greater than 2,000,000s, the model is saved once every 2,000,000s by default. If the model running time is less than 2,000,000s, the model is saved when the running is complete. Default value: 2000000
● max_epoches: Maximum number of training epochs. Default value: 100


● log_every_n_steps: Logs are printed every N steps. By default, logs are printed every 10 steps. Default value: 10
● save_summaries_steps: Summary information, including the model gradient update value and training parameters, is saved every N steps. Default value: 5
● weight_decay: L2 regularization weight decay. Default value: 0.00004
● optimizer: Optimizer. The options are dymomentumw, sgd, adam, and momentum. Default value: momentum
● momentum: Optimizer parameter momentum. Default value: 0.9
● patience: After N training epochs, if the precision (mAP for object detection and accuracy for image classification) does not increase compared with the previous maximum value, that is, the difference between the precision and the maximum precision is less than the value of decay_min_delta, the learning rate attenuates to one tenth of the original value. Default value: 8
● decay_patience: After training of extra M epochs on the basis of the preceding patience, if the precision still does not increase, training is terminated early. Default value: 1
● decay_min_delta: Minimum difference between the precision values corresponding to different learning rates. If the difference is greater than 0.001, the precision is considered to have increased; otherwise, it is not. Default value: 0.001
● image_size: Size of the input image. If this parameter is set to None, the default image size prevails. Default value: None


● lr_warmup_strategy: Warm-up strategy (linear or exponential). Default value: linear
● num_readers: Number of threads for reading data. Default value: 64
● fp16: Whether to use FP16 for training. Default value: FALSE
● max_lr: Maximum learning rate for the dymomentum and dymomentumw optimizers, or when use_lr_schedule is used. Default value: 6.4
● min_lr: Minimum learning rate for the dymomentum and dymomentumw optimizers, or when use_lr_schedule is used. Default value: 0.005
● warmup: Proportion of warm-up in total training steps. This parameter is valid when use_lr_schedule is lcd or poly. Default value: 0.1
● cooldown: Minimum learning rate in the warm-up. Default value: 0.05
● max_mom: Maximum momentum. This parameter is valid for dynamic momentum. Default value: 0.98
● min_mom: Minimum momentum. This parameter is valid for dynamic momentum. Default value: 0.85
● use_lars: Whether to use LARS. Default value: FALSE
● use_nesterov: Whether to use Nesterov momentum. Default value: TRUE
● preprocess_threads: Number of threads for image preprocessing. Default value: 12
● use_lr_schedule: Learning rate adjustment policy ('lcd': linear_cosine_decay, 'poly': polynomial_decay). Default value: None

darknet_53

Table 3-8 Algorithm description

Name: darknet_53
Usage: Image Classification
Engine Type: MXNet, MXNet-1.2.1-python2.7


Precision: 78.56% (top1), 94.43% (top5)
● top1 indicates that the classification is considered correct only when the class with the maximum probability is the correct class.
● top5 indicates that the classification is considered correct only when the correct class is within the top 5 predicted classes.
Training Dataset: ImageNet, classification of 1,000 image classes
Data Format: shape: [H>=224, W>=224, C>=1]; type: int8
Running Parameter: split_spec=0.8; batch_size=4. For more available running parameters, see Table 3-9.

Table 3-9 Running parameters

Optional parameters and their default values:

● num_classes: Total number of image classes in training. Default value: None
● num_epoch: Number of training epochs. Default value: 10
● batch_size: Total amount of input data each time the parameters are updated. Default value: 4
● lr: Learning rate. Default value: 0.0001
● image_shape: Shape of the input image. Default value: 3,224,224
● split_spec: Split ratio of the training set and validation set. Default value: 0.8
● save_frequency: Interval for saving the model, indicating that the model is saved every N epochs. Default value: 1

SegNet_VGG_BN_16

Table 3-10 Algorithm description

Name: SegNet_VGG_BN_16
Usage: Image semantic segmentation
Engine Type: MXNet, MXNet-1.2.1-python2.7


Precision: 89% (pixel acc). pixel acc indicates the ratio of correctly classified pixels to total pixels.
Training Dataset: Camvid
Data Format: shape: [H=360, W=480, C==3]; type: int8
Running Parameter: deploy_on_terminal=False. For more available running parameters, see Table 3-11.

Table 3-11 Running parameters

Optional parameters and their default values:

● lr: Learning rate of the updated parameters. Default value: 0.0001
● mom: Momentum of the training network. Default value: 0.9
● wd: Attenuation coefficient. Default value: 0.0005
● num_classes: Total number of image classes in training. You do not need to add 1 here. Default value: 11
● batch_size: Total number of training images updated each time. Default value: 8
● num_epoch: Number of training epochs. Default value: 15
● save_frequency: Interval for saving the model, indicating that the model is saved every N epochs. Default value: 1
● num_examples: Total number of images used for training, which is the number of files in train.txt. Default value: 2953
● data_shape: Shape of the input image. Default value: 3,256,256
● optimizer: Optimizer. The default value is sgd. Another option is nag. Default value: sgd
● lr_steps: Number of epochs at which the learning rate attenuates to 0.1 times of the original value in the multi-factor strategy. Default value: 7,12


ResNet_v2_50

Table 3-12 Algorithm description

Name: ResNet_v2_50
Usage: Image Classification
Engine Type: MXNet, MXNet-1.2.1-python2.7
Precision: 75.55% (top1), 92.6% (top5)
● top1 indicates that the classification is considered correct only when the class with the maximum probability is the correct class.
● top5 indicates that the classification is considered correct only when the correct class is within the top 5 predicted classes.
Training Dataset: ImageNet, classification of 1,000 image classes
Data Format: shape: [H>=32, W>=32, C>=1]; type: int8
Running Parameter: split_spec=0.8; batch_size=4. The available running parameters are the same as those for the darknet_53 algorithm. For details, see Table 3-9.

ResNet_v1_50

Table 3-13 Algorithm description

Name: ResNet_v1_50
Usage: Image Classification
Engine Type: TensorFlow, TF-1.8.0-python2.7
Precision: 74.2% (top1), 91.7% (top5)
● top1 indicates that the classification is considered correct only when the class with the maximum probability is the correct class.
● top5 indicates that the classification is considered correct only when the correct class is within the top 5 predicted classes.
Training Dataset: ImageNet, classification of 1,000 image classes
Data Format: shape: [H>=600, W<=1024, C>=1]; type: int8


Running Parameter: batch_size=32; split_spec=train:0.8,eval:0.2. The available running parameters are the same as those for the inception_v3 algorithm. For details, see Table 3-7.

Faster_RCNN_ResNet_v2_101

Table 3-14 Algorithm description

Name: Faster_RCNN_ResNet_v2_101
Usage: Object class and location
Engine Type: MXNet, MXNet-1.2.1-python2.7
Precision: 80.05% (mAP). mAP is an indicator that measures the effect of an object detection algorithm: the precision and recall rates are calculated for each class of object at different thresholds, a P-R curve is obtained accordingly, and the area under the curve is the average value.
Training Dataset: PASCAL VOC2007, detection of 20 classes of objects
Data Format: shape: [H, W, C==3]; type: int8
Running Parameter: lr=0.0001; eval_frequence=1. For more available running parameters, see Table 3-15.

Table 3-15 Running parameters

Optional parameters and their default values:

● num_classes: Total number of image classes in training. You must add 1 to the value because there is a background class. Default value: None
● eval_frequence: Frequency for validating the model. By default, validation is performed every epoch. Default value: 1
● lr: Learning rate. Default value: 0.0001


● mom: Momentum of the training network. Default value: 0.9
● wd: Weight decay coefficient (L2). Default value: 0.0005
● export_model: Whether the generated model is exported in the format required for deploying an inference service. Default value: TRUE
● split_spec: Split ratio of the training set and validation set. Default value: 0.8
● optimizer: Optimizer. The default value is sgd. Another option is nag. Default value: sgd

Faster_RCNN_ResNet_v1_50

Table 3-16 Algorithm description

Name: Faster_RCNN_ResNet_v1_50
Usage: Object class and location
Engine Type: TensorFlow, TF-1.8.0-python2.7
Precision: 73.6% (mAP). mAP is an indicator that measures the effect of an object detection algorithm: the precision and recall rates are calculated for each class of object at different thresholds, a P-R curve is obtained accordingly, and the area under the curve is the average value.
Training Dataset: PASCAL VOC2007, detection of 20 classes of objects
Data Format: shape: [H>=600, W<=1024, C>=1]; type: int8
Running Parameter: batch_size=32; split_spec=train:0.8,eval:0.2. The available running parameters are the same as those for the retinanet_resnet_v1_50 algorithm. For details, see Table 3-5.


3.3 Creating a Training Job

After data preparation is complete, create a training job for model training based on existing data. Training is automatically performed once each time a training job is created.

Prerequisites

● Data has been prepared. Specifically, you have created an available dataset in ModelArts, or you have uploaded the dataset used for training to the OBS directory.
● At least one empty folder has been created on OBS for storing the training output.
● Ensure that the account is not in arrears. Resources are consumed when training jobs are running.
● Ensure that the OBS directory you use and ModelArts are in the same region.

Precautions

In the dataset directory specified for a training job, the names of the files containing data used for training (such as image, audio, and labeling files) can contain a maximum of 255 characters. If the names of certain files in the dataset directory exceed 255 characters, the training job ignores these files and uses the data in the valid files for training. If the names of all files in the dataset directory exceed 255 characters, no data is available and the training job fails.
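As a quick pre-upload check, the following is a minimal sketch (the local directory name is a placeholder) that lists files whose names exceed this limit:

import os

def find_overlong_names(root, limit=255):
    # Return files whose names exceed the limit; such files would be ignored during training.
    overlong = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if len(name) > limit:
                overlong.append(os.path.join(dirpath, name))
    return overlong

print(find_overlong_names("data_url"))  # hypothetical local copy of the dataset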

Creating a Training Job

1. Log in to the ModelArts management console. In the left navigation pane, choose Training Management > Training Jobs. By default, the system switches to the Training Jobs page.

2. In the upper left corner of the training job list, click Create to switch to the Create Training Job page.

3. Set related parameters and click Next.

a. Set the basic information, including Billing Mode, Name, Version, and Description. Currently, Billing Mode supports only Pay-per-use. The Version information is automatically generated by the system and named in ascending order of V001, V002, and so on. You cannot manually modify it.
Specify Name and Description according to actual requirements.


Figure 3-2 Setting basic information about the training job

b. Set job parameters, including Data Source, Algorithm Source, and more. For details, see Table 3-17.

Figure 3-3 Setting job parameters

Table 3-17 Job parameter description

Parameter: One-Click Configuration
Description: If you have saved job parameter configurations in ModelArts, click One-Click Configuration and select an existing job parameter configuration as prompted to quickly complete parameter setting for the training job.


Parameter: Data Source (sub-parameter: Datasets)
Description: Select an available dataset and its version.
● Dataset: Select an existing dataset from the drop-down list box. If no dataset is available in ModelArts, no result will be displayed in the drop-down list box.
● Version: Select a version according to the Dataset setting.
For a training job, you can add multiple datasets, and delete a dataset in the row for it.

Parameter: Data Source (sub-parameter: Data path)
Description: Select the training data from the OBS bucket. On the right of the Data path text box, click Select. In the dialog box that is displayed, select an OBS folder for storing data. If you set Algorithm Source to Frequently-used, you can add multiple data storage paths for a training job, and delete a data storage path in the row for it.

Parameter: Algorithm Source (sub-parameter: Built-in)
Description: Select a preset algorithm in ModelArts. For details, see Introduction to Built-in Algorithms.

Parameter: Running Parameter
Description: Set the key running parameters in code. Ensure that the parameter names are the same as those in code. For example, train_steps = 10000, where train_steps is a parameter passed to the code. (A minimal sketch of how such parameters reach the training code follows this table.)

Parameter: Training Output Path
Description: Storage path of the training result.
NOTE: To avoid errors, you are advised to select an empty directory for Training Output Path. Do not select the directory used for storing the datasets as Training Output Path.

Parameter: Job Log Path
Description: Path for storing the log files generated during job running.
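The following is a minimal sketch, under the assumption that running parameters are passed to the training script as command-line arguments; the parameter names data_url, train_url, and train_steps are illustrative only and must match whatever names your own code expects:

import argparse

# Hypothetical entry point of a training script. Running parameters set in the console
# are assumed to be passed to the script and must match these argument names.
parser = argparse.ArgumentParser()
parser.add_argument("--data_url", type=str, help="path of the training data")
parser.add_argument("--train_url", type=str, help="path for the training output")
parser.add_argument("--train_steps", type=int, default=10000, help="number of training steps")
args, _ = parser.parse_known_args()

print(f"training for {args.train_steps} steps on data from {args.data_url}")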

c. Select resources for the training job.


Figure 3-4 Selecting resources for the training job

Table 3-18 Resource parameter description

Resource Pool: Select the resource pool for the training job. For training jobs, public resource pools and dedicated resource pools are available. Instances in the public resource pool can be of the CPU or GPU type. Pricing standards for resource pools with different instance types are different. For details, see Product Pricing Details. For details about how to create a dedicated resource pool, see Resource Pools.

Type: If you select a public resource pool, select a resource type. Available resource types are CPU and GPU.
NOTE: If GPU resources are used in the training code, you must select a GPU cluster when selecting a resource pool. Otherwise, the training job may fail.

Specifications: Select resource specifications based on the resource type. The following specifications are supported:
● CPU: CPU: 2 vCPUs 8 GiB and CPU: 8 vCPUs 32 GiB
● GPU: CPU: 8 vCPUs 64 GiB GPU: 1 x nvidia-p100 16 GiB and CPU: 32 vCPUs 256 GiB GPU: 4 x nvidia-p100 16 GiB

Compute Nodes: Set the number of compute nodes. If you set Compute Nodes to 1, the standalone computing mode is used. If you set Compute Nodes to a value greater than 1, the distributed computing mode is used. Select a computing mode based on the actual requirements. When Frequently-used of Algorithm Source is set to Caffe, only standalone training is supported, that is, Compute Nodes must be set to 1. For other options of Frequently-used, you can select the standalone or distributed mode based on service requirements.


d. Configure the subscription function and set whether to save the parameter settings of the training job.

Figure 3-5 Configuring notifications for the training job

Table 3-19 Parameters related to the subscription function and parameter setting saving

Notification: Select the resource pool status to be monitored from the event list; SMN sends a notification message when the event occurs. This parameter is optional. You can choose whether to enable subscription based on the actual requirements. If you enable subscription, set the following parameters as required:
● Topic: indicates the topic name. You can refer to Create Topic to create a topic on the SMN console.
● Event: indicates the event to be subscribed to. The options are OnJobRunning, OnJobSucceeded, and OnJobFailed, indicating that the training is in progress, successful, and failed, respectively.

Save Training Parameters: If you select this option, the parameter settings of the current training job will be saved to facilitate subsequent job creation. Select Save Training Parameters, and specify Configuration Name and Description. After the training job is created, you can switch to the Job Parameter Mgmt tab page to view your saved job parameter configuration. For details, see Managing Job Parameters.

e. After setting the parameters, click Next.

4. On the displayed Specifications page, confirm that the information is correct and click Next. Generally, training jobs run for a period of time, which may be several minutes or tens of minutes depending on the amount of your selected data and resources.


After a training job is created, it is started immediately. During the running, you will be charged based on your selected resources.

You can switch to the training job list to view the basic information about training jobs. In the training job list, Status of the newly created training job is Initializing. If the status changes to Successful, the training job ends and the generated model is stored in the location specified by Training Output Path. If the status of a training job changes to Running failed, click the name of the training job, click Logs to view the job logs, and troubleshoot the fault based on the logs.

3.4 Managing Training Job Versions

During model building, you may need to frequently tune the data, training parameters, or the model based on the training results to obtain a satisfactory model. ModelArts allows you to manage training job versions so that you can train your model effectively after tuning. Specifically, ModelArts generates a version each time a training is performed, and you can quickly compare the differences between versions.

Viewing Training Job Versions

1. Log in to the ModelArts management console. In the left navigation pane, choose Training Management > Training Jobs. By default, the system switches to the Training Jobs page.

2. In the training job list, click the name of a training job.

By default, the basic information about the latest version is displayed. If there are multiple available versions, select a version in the upper left corner to view it. Click the expand icon to the left of a version to display the job details. For details about the training job, see Training Job Details.

Figure 3-6 Viewing training job versions


Comparing Versions of a Training Job

On the Version Manager page, click View Comparison Result to view the comparison of all or selected versions of the current training job. The comparison result involves the following information: Running Parameter, F1 Score, Recall, Precision, and Accuracy.

Figure 3-7 Comparing versions of a training job

Shortcut Operations Based on Training Job Versions

On the Version Manager page, ModelArts provides shortcut operation buttons for you to quickly enter the subsequent steps after model training is complete.

Table 3-20 Shortcut operation button description

Create Visualization Job: Creates a visualization job (TensorBoard) for the current training version. For details about how to create a TensorBoard job, see Managing Visualization Jobs.
NOTE: Currently, TensorBoard supports only the TensorFlow and MXNet engines. Therefore, you can create TensorBoard jobs only when the AI engine is TensorFlow or MXNet.

Create Model: Creates a model for the current training version. For details about how to create a model, see Importing a Model. You can only create models for training jobs in the Running state.

Modify: If the training result of the current version does not meet service requirements or the training job fails, click Modify to switch to the page where you can modify the job parameter settings. For details about the parameters of the training job, see Creating a Training Job. After modifying the job parameter settings as required, click OK to start the training job of a new version.


Save Training Parameters: You can save the job parameter settings of this version as a job parameter configuration, which will be displayed on the Job Parameter Mgmt page. Click More > Save Training Parameters to switch to the Training Parameter page. After confirming that the settings are correct, click OK. For details about training parameter management, see Managing Job Parameters.

Stop: Click More > Stop to stop the training job of the current version. Only training jobs in the Running state can be stopped.

Delete: Click More > Delete to delete the training job of the current version.

Figure 3-8 Shortcut operation buttons

3.5 Viewing Job Details

After a training job finishes, you can manage the training job versions and check whether the training result of the job is satisfactory by viewing the job details, evaluation details, and traceback diagrams.

Training Job Details

In the left navigation pane of the ModelArts management console, choose Training Management > Training Jobs to switch to the Training Jobs page. In the training job list, click a job name to view the job details.

Table 3-21 lists the parameters of the training job of each version.


Figure 3-9 Training job details

Table 3-21 Training job details

Version: Version of a training job, which is automatically defined by the system, for example, V001 and V002.

Status: Status of a training job, including Deploying, Running, Successful, Running failed, and Canceled.

Duration: Running duration of a training job.

Configurations: Details about the parameters of the current training job version.

Logs: Logs of the current training job version. If you set Log Output Path when creating a training job, you can click the download button on the Logs tab page to download the logs stored in the OBS bucket to the local host.

Resource Usages: Usage of resources of the current training version, including the CPU, GPU, and memory.

Evaluation Result: Evaluation result of the current training job. For details about the parameters, see Evaluation Result.

Traceback Diagrams

In the training job list, click a job name to switch to the training job details page. By default, the Version Manager page is displayed. Click the Traceback Diagrams tab.

On the Traceback Diagrams tab page, you can view information about Data, Algorithm, Training Job, Model, and Web Service for the current training job. You can also compare the traceback diagrams of two training job versions.

In the diagram area on the Traceback Diagrams tab page, click an element. The detailed information about the element is displayed in the right pane of the page, together with the follow-up operations that you can perform on this element. The following figure illustrates an example: detailed parameter settings of the selected training job are displayed in the right pane, and shortcut operations indicated by TensorBoard, Create Model, Retrain, and Save Job Parameters are supported.


Figure 3-10 Viewing the traceback diagram

Evaluation Result

In the left navigation pane of the ModelArts management console, choose Training Management > Training Jobs to switch to the Training Jobs page. After a training job is successfully executed, click the job name to switch to the Version Manager tab page. Click the Evaluation Result tab to view details about the evaluation result of the training job. The details include the label list and portals of the matrix view and custom matrix view. You can query the result based on the label name. For details, see Table 3-22.

Figure 3-11 Evaluation Result tab page

Table 3-22 Content on the Evaluation Result tab page

Label list: The label list contains Label, Precision, Recall, F1 Score, and False Discovery.

Matrix View: Click Matrix View to switch to the Matrix View page to view details. The matrix view is displayed by default.


Custom Matrix View: Select the target label and click Custom Matrix View to view details about the target label. If no label is selected, Custom Matrix View is grayed out.

Label Details

On the Evaluation Result page, click a label name to view the label prediction result. The label details page lists the label information, prediction result, false discovery source, and portal of the matrix view. See Figure 3-12.

Figure 3-12 Label details page

Table 3-23 Description of the label details page

Label information: Includes Precision, Recall, F1 Score, False Discovery Rate, and Images with the Label.

Prediction: Involves Label, Images, and Proportion. You can sort prediction labels by Images or Proportion. Prediction results can be displayed on multiple pages.

False Discovery Source: Lists images that are incorrectly labeled. Click an image to display its label information and prediction result.

Matrix View: Click a label name to switch to the matrix view page.

Matrix View

The Matrix View page displays the following function modules: label matrix view, matrix grid scaling, label filter, and prediction feature portal.

1. On the Evaluation Result tab page, view the matrix view.

– To view matrix views of all labels:

▪ On the Evaluation Result tab page, click Matrix View to switch to the Matrix View page, where you can view details about all labels.


▪ On the Evaluation Result tab page, click the target label name. On the displayed label details page, click Matrix View to view details about the selected label.

– To view the matrix view of the target label:

▪ On the Evaluation Result tab page, select the target label, and click Custom Matrix View to switch to the Custom Matrix View, where you can view details about your selected label.

▪ On the Evaluation Result tab page, click the target label name. On the displayed label details page, click the label name in the label list under Prediction to view details about the selected label.

2. Switch to the Matrix View page, where you can filter labels, set the number of labels to be displayed on this page, and view data and thumbnails.

Figure 3-13 Matrix View page

– A label matrix view can be displayed in data or thumbnail mode. The vertical coordinate indicates labels and the horizontal coordinate indicates prediction results. Results (recall rate and number of images) are presented in a matrix; a small computation sketch follows this list.

– Filter Label: Click Filter Label. In the displayed Filter Label dialog box, select the labels and prediction results to be displayed in the matrix grid and click OK.

– In the matrix grid, you can set the number of labels and prediction results that can be directly seen without dragging the vertical or horizontal scroll bar. Available options include 10 prediction results and 10 labels, 15 prediction results and 15 labels, and 20 prediction results and 20 labels. You can set the number of labels and prediction results displayed on the matrix grid by setting the values in the upper right corner. If the number of your selected labels or prediction results to be displayed in the matrix grid is greater than your setting next to Filter Label, you can drag the vertical or horizontal scroll bar to view the labels or prediction results that cannot be directly seen.
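As general background only (this sketch is illustrative and is not how ModelArts computes the view), the per-label image counts and recall shown in the matrix can be derived from the ground-truth and predicted labels of the evaluated images; the label names and values below are placeholders.

# Illustrative sketch: per-label counts and recall from true vs. predicted labels.
# Label names and values are placeholders, not real evaluation data.
from collections import Counter

labels = ['cat', 'dog', 'bird']                       # hypothetical label set
y_true = ['cat', 'cat', 'dog', 'bird', 'dog', 'cat']  # ground-truth labels
y_pred = ['cat', 'dog', 'dog', 'bird', 'dog', 'cat']  # model predictions

counts = Counter(zip(y_true, y_pred))                 # (label, prediction) -> image count
for label in labels:
    row_total = sum(counts[(label, p)] for p in labels)
    recall = counts[(label, label)] / row_total if row_total else 0.0
    cells = ', '.join(f'{p}: {counts[(label, p)]}' for p in labels)
    print(f'{label:>5} | recall = {recall:.2f} | {cells}')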

3. On the matrix view page, click Data or Thumbnail and click a grid with data to enter the prediction feature GUI. The prediction feature GUI displays the basic label prediction information, the normal view and heat map, and prediction details of a single image.
– The basic label prediction information includes the recognition result, recall rate, and number of images.
– Normal view and heat map: Click Normal View or Heat Map and choose the label and prediction result from the vertical and horizontal coordinates in the matrix grid to display the corresponding normal view or heat map.

– Click an image in the matrix grid to view prediction details of the image, which are displayed in the lower part of the page and involve the original image and the corresponding heat map as well as the Class and Confidence score of the image.

Figure 3-14 Prediction feature GUI

4. On the prediction feature visualization page, specify the value to set the number of images displayed on the page.

3.6 Managing Job Parameters

You can store the parameter settings in ModelArts during job creation so that you can use the stored settings to create follow-up training jobs, which makes job creation more efficient.

When you create, edit, or view training jobs, the saved job parameter settings are displayed on the Job Parameter Mgmt page.

Using a Job Parameter Configuration

● Method 1: Using a job parameter configuration on the Job Parameter Mgmt page
Log in to the ModelArts management console. In the left navigation pane, choose Training Management > Training Jobs. On the displayed page, click the Job Parameter Mgmt tab. In the job parameter list, click Creating Training Job for a job parameter configuration to create a training job based on the job parameter configuration.

● Method 2: Using a job parameter configuration on the Creating Training Job page
On the Creating Training Job page, click One-Click Configuration. In the displayed dialog box, select the required job parameter configuration to quickly create an available training job.


Figure 3-15 Job parameter configuration management

Editing a Job Parameter Configuration

1. Log in to the ModelArts management console. In the left navigation pane, choose Training Management > Training Jobs. On the displayed page, click the Job Parameter Mgmt tab.
2. In the job parameter configuration list, click Edit in the Operation column in a row.
3. On the displayed page, modify related parameters by referring to Table 3-17 and click OK to save the job parameter settings.
In the existing job parameter settings, the job name cannot be changed.

Deleting a Training Job Parameter Configuration

1. Log in to the ModelArts management console. In the left navigation pane, choose Training Management > Training Jobs. On the displayed page, click the Job Parameter Mgmt tab.
2. In the job parameter list, click Delete in the Operation column in a row.
3. In the displayed dialog box, click OK.

Deleted job parameter configurations cannot be recovered. Therefore, exercise caution when performing this operation.

3.7 Managing Visualization Jobs

The ModelArts visualization jobs you manage are of the TensorBoard type by default.

TensorBoard is a tool that can effectively display the computational graph of TensorFlow in the running process, the trend of various metrics over time, and the data used in the training. Currently, TensorBoard supports only the training jobs based on the TensorFlow and MXNet engines. For more information about TensorBoard, see the TensorBoard official website.

For training jobs using TensorFlow and MXNet, you can use the summary file generated during model training to create a TensorBoard job.

Prerequisites

To ensure that the summary file is generated in the training result, you need to add the related code to the training script.


● Using the TensorFlow engine: When using the TensorFlow-based MoXing, in mox.run, set save_summary_steps > 0 and summary_verbosity ≥ 1. If you want to display other metrics, add tensors to log_info in the return value mox.ModelSpec of model_fn. Only rank-0 tensors (scalars) are supported. The added tensors are written into the summary file. If you want to write tensors of higher ranks to the summary file, use the native tf.summary of TensorFlow in model_fn (see the sketch after this list).

● Using the MXNet engine: Add the following code to the script:
batch_end_callbacks.append(mx.contrib.tensorboard.LogMetricsCallback('S3 path'))
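For reference, the following is a minimal, illustrative sketch of writing scalar summaries with native tf.summary in a TensorFlow 1.x script; the toy graph, training loop, and output directory are placeholders and are not taken from a ModelArts sample. In a real training job, point the FileWriter at your training output directory so that the summary file is available when you create the TensorBoard job.

# Minimal illustrative sketch (TensorFlow 1.x): write scalar summaries that
# TensorBoard can read. The toy graph and the output directory are placeholders.
import tensorflow as tf

x = tf.placeholder(tf.float32, name='x')
loss = tf.square(x - 3.0, name='loss')      # toy scalar standing in for a real loss

tf.summary.scalar('loss', loss)             # only rank-0 (scalar) tensors here
merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter('./train_output/summary', sess.graph)
    for step in range(100):
        summary_value, _ = sess.run([merged, loss], feed_dict={x: step * 0.1})
        if step % 10 == 0:                  # comparable to save_summary_steps in MoXing
            writer.add_summary(summary_value, step)
    writer.close()

For MXNet, the LogMetricsCallback line above is typically appended to the batch_end_callback list that is passed to the training call in your script.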

Precautions

● You will be charged as long as your visualization jobs are in the Running status. Stop visualization jobs when you no longer need them to avoid unnecessary fees. Visualization jobs can also be stopped automatically at a specified time; you are advised to enable this function.

● By default, CPU resources are used to run visualization jobs and cannot be changed to other resources.

● Ensure that the OBS directory you use and ModelArts are in the same region.

Creating a Visualization Job

1. Log in to the ModelArts management console. In the left navigation pane, choose Training Management > Training Jobs. On the displayed page, click the Visualization Jobs tab.
2. In the upper left corner of the visualization job list, click Create to switch to the Create Visualization Job page.
3. Set Billing Mode to Pay-per-use and Job Type to Visualization Job. Enter the visualization job name and description as required, and set the Training Output Path and Auto Stop parameters.
– Training Output Path: Select the training output path specified when the training job is created.
– Auto Stop: Enable or disable the auto stop function. A running visualization job will be billed. To avoid unnecessary fees, you can enable the auto stop function to automatically stop the visualization job at the specified time. The options are 1 hour later, 2 hours later, 4 hours later, 6 hours later, and Custom. If you select Custom, you can enter any integer within 1 to 24 hours in the textbox on the right.


Figure 3-16 Creating a visualization job

4. Click Next.
5. After confirming the specifications, click Next.
In the visualization job list, when the status changes to Running, the TensorBoard job has been created. You can click the name of the visualization job to view its details.

Opening a Visualization Job

In the visualization job list, click the name of the target visualization job. The TensorBoard page is displayed. See Figure 3-17. Only visualization jobs in the Running status can be opened.

Figure 3-17 TensorBoard page


Running or Stopping a Visualization Job

● Stopping a visualization job: You can stop a running visualization job to stop billing when it is no longer needed. In the visualization job list, click Stop in the Operation column to stop the visualization job.
● Running a visualization job: You can run and use a visualization job in the Canceled state again. In the visualization job list, click Run in the Operation column to run the visualization job.

Deleting a Visualization Job

If your visualization job is no longer used, you can delete it to release resources. In the visualization job list, click Delete in the Operation column to delete the visualization job.

A deleted visualization job cannot be recovered. You need to create a new visualization job if you want to use it. Exercise caution when performing this operation.


4 Model Management

4.1 Model Management Overview

AI model development and optimization require frequent iterations and debugging. Changes in datasets, training code, or parameters may affect the quality of models. If the metadata of the development process cannot be managed in a unified manner, the optimal model may fail to be reproduced.

ModelArts model management allows you to import models generated with all training versions to manage all iterated and debugged models in a unified manner. You can also trace back your models with traceback diagrams of datasets, training, and models.

Usage Restrictions

● In an automatic learning project, after a model is deployed, the model is automatically uploaded to the model management list. However, models generated by automatic learning cannot be downloaded and can be used only for deployment and rollout.
● Functions such as model import, model version management, and model conversion are available to all users free of charge.

Four Methods of Importing a Model

● Importing from Trained Models: You can create a training job on ModelArts and complete model training. After obtaining a satisfactory model, import the model to the Model Management page for model deployment.
● Importing from a Template: Because the configurations of models of the same function are similar, ModelArts integrates the configurations of such models into a common template. By using this template, you can easily and quickly import models without compiling the config.json configuration file.
● Importing from a Container Image: For AI engines that are not supported by ModelArts, you can import the model you compile to ModelArts using custom images.
● Importing from OBS: If you use a frequently-used framework to develop and train a model locally, you can import the model to ModelArts for model deployment.


Model Management Functions

Table 4-1 Model management functions

Importing a Model: Import the trained models to ModelArts for unified management. You can import models using four methods. The following provides the operation guide for each method.
● Importing a Meta Model from a Training Job
● Importing a Meta Model from a Template
● Importing a Meta Model from a Container Image
● Importing a Meta Model from OBS

Managing Model Versions: To facilitate source tracing and repeated model tuning, ModelArts provides the model version management function. You can manage models based on versions.

Publishing a Model: Models imported to ModelArts can be published to AI Market or submitted to challenges.

Compressing and Converting Models: To obtain higher computing power, you can deploy the models created on ModelArts or a local PC on the Ascend chip, ARM, or GPU. In this case, you need to compress or convert the models to the required formats before deploying them.

Model Package Specifications: When importing a model to ModelArts, comply with the ModelArts specifications. For details, see the model package specifications, model package definitions, inference code, and configuration files.

4.2 (Optional) Purchasing Model Tuning

ModelArts provides professional model tuning services. If you are not satisfied with an existing model and not willing to adjust the model by yourself, you can purchase the model tuning service to help you optimize the model.

Purchasing Model Tuning

1. Log in to the ModelArts management console. In the Quick Links area on the right of the Dashboard page, click Buy Model Tuning.
2. On the Buy Model Tuning page, enter the amount, select I have read and agree to the Payment Statement in the lower right corner, and click Submit.
After the submission, HUAWEI CLOUD engineers will contact you to get to know your specific requirements.


Figure 4-1 Purchasing model tuning

4.3 Importing a Model

4.3.1 Importing a Meta Model from a Training Job

You can create a training job on ModelArts and perform training to obtain a satisfactory model. Then import the model to Model Management for unified management. In addition, you can quickly deploy the model as a service.

Background

● If a model generated by the ModelArts training job is used, ensure that the training job has been successfully executed and the model has been stored in the corresponding OBS directory.
● If a model is generated from a training job that uses built-in algorithms, the model can be directly imported to ModelArts without using the inference code and configuration file.
● If a model is generated from a training job that uses a common framework or custom image, upload the inference code and configuration file to the storage directory of the model by referring to Model Package Specifications.
● Ensure that the OBS directory you use and ModelArts are in the same region.

Procedure

1. Log in to the ModelArts management console, and choose Model Management > Models in the left navigation pane. The Models page is displayed.
2. Click Import in the upper left corner. The Import page is displayed.
3. On the Import page, set related parameters.
a. Enter basic information about the model. For details about the parameters, see Table 4-2.


Table 4-2 Parameters of the model basic information

Name: Model name. The value can contain 1 to 64 visible characters, including Chinese characters. Only letters, Chinese characters, digits, hyphens (-), and underscores (_) are allowed.

Version: Version of the model to be created. For the first import, the default value is 0.0.1.

Label: Model label. A maximum of five model labels are supported.

Description: Brief description of the model.

b. Select the meta model source and set related parameters. Meta Model Source has four options based on the scenario. For details, see Four Methods of Importing a Model. Set Meta Model Source to Training job. For details about the parameters, see Table 4-3.

Figure 4-2 Setting Meta Model Source to Training job

Table 4-3 Parameters of the meta model source

Meta Model Source: Select Training job, and select a specified training job that has completed training under the current account and its version from the drop-down list boxes on the right of Training Job and Version respectively.


Deployment Type: After the model is imported, select the service type for which the model will be deployed. When deploying a service, you can only deploy the service type selected here. For example, if you only select Real-time services here, you can only deploy real-time services after importing the model. ModelArts currently supports the following deployment types: Real-time services, Batch services, and Edge services.

Inference Code: Displays the model inference code URL. You can copy this URL directly.

Parameter Configuration: Click the button on the right to view the input and output parameters of the model.

Runtime Dependency: Lists the dependencies of the selected model on the environment. For example, if tensorflow is used and the installation method is pip, the version must be 1.8.0 or later.

c. Set the inference specifications and model description.

▪ Min. Inference Specs: If your model requires certain resources to complete inference, you can configure this parameter to set the minimum specifications required for normal inference after the model is deployed as a service. In later versions, the system will allocate resources based on the inference specifications you set in service deployment. You can modify the specifications as required during deployment. Note that the specifications configured here are valid only when real-time services are deployed and the dedicated resource pool is used or when edge services are deployed.
▪ Model Description: To help other model developers better understand and use your models, you are advised to provide model descriptions. Click Add Model Description and then set the document name and URL. A maximum of three model descriptions are supported.

Figure 4-3 Setting the inference specifications and model description

d. Check the information and click Next. The model is imported.
In the model list, you can view the imported model and its version. When the model status changes to Normal, the model is successfully imported. On this page, you can create new versions, quickly deploy models, publish models to the market, and perform other operations.


Follow-Up Procedure

● Model Deployment: On the Models page, click the triangle next to a model name to view all versions of the model. Locate the row that contains the target version, click Deploy in the Operation column, and select the deployment type configured when importing the model from the drop-down list box. On the Deploy page, set parameters by referring to Model Deployment Overview.

4.3.2 Importing a Meta Model from a Template

Because the configurations of models of the same function are similar, ModelArts integrates the configurations of such models into a common template. By using this template, you can easily and quickly import models without compiling the config.json configuration file.

Background

● Because the configurations of models of the same function are similar, ModelArts integrates the configurations of such models into a common template. By using this template, you can easily and quickly import the model. For details about the template, see Model Template Overview.
● For details about supported templates, see Supported Templates. For details about the input and output modes of each template, see Supported Input and Output Modes.
● Ensure that you have uploaded the model to OBS based on the model package specifications of the corresponding template.
● Ensure that the OBS directory you use and ModelArts are in the same region.
● Importing and managing models is free of charge.

Procedure

1. Log in to the ModelArts management console, and choose Model Management > Models in the left navigation pane. The Models page is displayed.
2. Click Import in the upper left corner. The Import page is displayed.
3. On the Import page, set related parameters.
a. Enter basic information about the model. For details about the parameters, see Table 4-4.

Table 4-4 Parameters of the model basic information

Name: Model name. The value can contain 1 to 64 visible characters, including Chinese characters. Only letters, Chinese characters, digits, hyphens (-), and underscores (_) are allowed.

Version: Version of the model to be created. For the first import, the default value is 0.0.1.


Label: Model label. A maximum of five model labels are supported.

Description: Brief description of the model.

b. Select the meta model source and set related parameters. Meta Model Source has four options based on the scenario. For details, see Four Methods of Importing a Model. Set Meta Model Source to Template. For details about the parameters, see Table 4-5.

Figure 4-4 Setting Meta Model Source to Template

Table 4-5 Parameters of the meta model source

Model Template: Select a template from the existing ModelArts template list, such as the TensorFlow-based image classification template. ModelArts also provides three filter criteria: Type, Engine, and Environment, helping you quickly find the desired template. If the three filter criteria cannot meet your requirements, you can enter keywords to search for the target template. For details about supported templates, see Supported Templates.

Model Directory: OBS path where a model is saved.
NOTE: If a training job is executed multiple times, different version directories are generated, such as V001 and V002, and the generated models are stored in the model folder in different version directories. When selecting model files, specify the model folder in the corresponding version directory.


Input and Output Mode: If the default input and output mode of the selected template can be overwritten, you can select an input and output mode based on the model function or application scenario. Input and Output Mode is an abstraction of the API (apis) in config.json. It describes the interface provided by the model for external inference. An input and output mode describes one or more APIs and corresponds to a template. For example, for the TensorFlow-based image classification template, Input and Output Mode supports the Built-in image processing mode. The input and output mode cannot be modified in the template. Therefore, you can only view but not modify the default input and output mode of the template on the page. For details about the supported input and output modes, see Supported Input and Output Modes.

Deployment Type: After the model is imported, select the service type for which the model will be deployed. When deploying a service, you can only deploy the service type selected here. For example, if you only select Real-time services here, you can only deploy real-time services after importing the model. ModelArts currently supports the following deployment types: Real-time services, Batch services, and Edge services.

c. Set the inference specifications and model description.

▪ Min. Inference Specs: If your model requires certain resources to complete inference, you can configure this parameter to set the minimum specifications required for normal inference after the model is deployed as a service. In later versions, the system will allocate resources based on the inference specifications in service deployment. You can also modify the specifications as required during deployment. Note that the specifications configured here are valid only when real-time services are deployed and the dedicated resource pool is used or when edge services are deployed.
▪ Model Description: To help other model developers better understand and use your models, you are advised to provide model descriptions. Click Add Model Description and then set the document name and URL. A maximum of three model descriptions are supported.

Figure 4-5 Setting the inference specifications and model description


d. Check the information and click Next. The model is imported.

In the model list, you can view the imported model and its version. When the model status changes to Normal, the model is successfully imported. On this page, you can create new versions, quickly deploy models, publish models to the market, and perform other operations.

Follow-Up Procedure

● Model Deployment: On the Models page, click the triangle next to a model name to view all versions of the model. Locate the row that contains the target version, click Deploy in the Operation column, and select the deployment type configured when importing the model from the drop-down list box. On the Deploy page, set parameters by referring to Model Deployment Overview.

4.3.3 Importing a Meta Model from a Container Image

For AI engines that are not supported by ModelArts, you can import the model you compile to ModelArts from custom images.

Prerequisites

● The configuration file must be provided for a model that you have developed and trained. The file must comply with ModelArts specifications. For details about the specifications, see Specifications for Compiling the Model Configuration File. After the compilation is completed, upload the file to the specified OBS directory.
● Ensure that the OBS directory you use and ModelArts are in the same region.

Procedure

1. Log in to the ModelArts management console, and choose Model Management > Models in the left navigation pane. The Models page is displayed.
2. Click Import in the upper left corner. The Import page is displayed.
3. On the Import page, set related parameters.
a. Enter basic information about the model. For details about the parameters, see Table 4-6.

Table 4-6 Parameters of the model basic information

Name: Model name. The value can contain 1 to 64 visible characters, including Chinese characters. Only letters, Chinese characters, digits, hyphens (-), and underscores (_) are allowed.

Version: Version of the model to be created. For the first import, the default value is 0.0.1.


Label: Model label. A maximum of five model labels are supported.

Description: Brief description of the model.

b. Select the meta model source and set related parameters. Meta Model Source has four options based on the scenario. For details, see Four Methods of Importing a Model. Set Meta Model Source to Container image. For details about the parameters, see Table 4-7.

Figure 4-6 Setting Meta Model Source to Container image

Table 4-7 Parameters of the meta model source

Container Image Path: Click the button on the right to import the model image from the container image. The model is of the Image type, and you do not need to use swr_location in the configuration file to specify the image location.
NOTE: The model image you select will be shared with the administrator, so ensure you have the permission to share the image (images shared with other accounts are unsupported). When you deploy a service, ModelArts deploys the image as an inference service. Ensure that your image can be properly started and provides an inference interface.


Deployment Type: After the model is imported, select the service type for which the model will be deployed. When deploying a service, you can only deploy the service type selected here. For example, if you only select Real-time services here, you can only deploy real-time services after importing the model. ModelArts currently supports the following deployment types: Real-time services, Batch services, and Edge services.

Configuration File: The Import from OBS and Edit online methods are available. The configuration file must comply with certain specifications in Model Package Specifications. If you select Import from OBS, you need to specify the OBS path for storing the configuration file. You can enable View Configuration File to view or edit the configuration file online.

Parameter Configuration: Click the button on the right to view the input and output parameters of the model.

c. Set the inference specifications and model description.

▪ Min. Inference Specs: If your model requires certain resources to complete inference, you can configure this parameter to set the minimum specifications required for normal inference after the model is deployed as a service. In later versions, the system will allocate resources based on the inference specifications in service deployment. You can also modify the specifications as required during deployment. Note that the specifications configured here are valid only when real-time services are deployed and the dedicated resource pool is used or when edge services are deployed.
▪ Model Description: To help other model developers better understand and use your models, you are advised to provide model descriptions. Click Add Model Description and then set the document name and URL. A maximum of three model descriptions are supported.

Figure 4-7 Configuring the inference specifications and model description

d. Check the information and click Next. The model is imported.

In the model list, you can view the imported model and its version. When the model status changes to Normal, the model is successfully imported.


On this page, you can create new versions, quickly deploy models, publish models to the market, and perform other operations.

Follow-Up Procedure

● Model Deployment: On the Models page, click the triangle next to a model name to view all versions of the model. Locate the row that contains the target version, click Deploy in the Operation column, and select the deployment type configured when importing the model from the drop-down list box. On the Deploy page, set parameters by referring to Model Deployment Overview.

4.3.4 Importing a Meta Model from OBS

In scenarios where common frameworks are used for model development and training, you can import the model to ModelArts for unified management.

Prerequisites

● The model has been developed and trained, and the type and version of the AI engine it uses is supported by ModelArts. Common engines supported by ModelArts and their runtime ranges are described as follows:

Table 4-8 Supported AI engines and their runtime

TensorFlow
Runtime: python3.6, python2.7, tf1.13-python2.7-gpu, tf1.13-python2.7-cpu, tf1.13-python3.6-gpu, tf1.13-python3.6-cpu
Precautions: TensorFlow 1.8.0 is used in python2.7 and python3.6. The default runtime is python2.7.

MXNet
Runtime: python3.6, python2.7
Precautions: MXNet 1.2.1 is used in python2.7 and python3.6. The default runtime is python2.7.


Caffe
Runtime: python2.7, python3.6, python2.7-gpu, python3.6-gpu, python2.7-cpu, python3.6-cpu
Precautions: Caffe 1.0.0 is used in python2.7, python3.6, python2.7-gpu, python3.6-gpu, python2.7-cpu, and python3.6-cpu. python2.7 and python3.6 can only be used to run models applicable to CPU. You are advised to use the runtime of python2.7-gpu, python3.6-gpu, python2.7-cpu, and python3.6-cpu. The default runtime is python2.7.

Spark_MLlib
Runtime: python2.7, python3.6
Precautions: Spark_MLlib 2.3.2 is used in python2.7 and python3.6. The default runtime is python2.7.

Scikit_Learn
Runtime: python2.7, python3.6
Precautions: Scikit_Learn 0.18.1 is used in python2.7 and python3.6. The default runtime is python2.7.

XGBoost
Runtime: python2.7, python3.6
Precautions: XGBoost 0.80 is used in python2.7 and python3.6. The default runtime is python2.7.

PyTorch
Runtime: python2.7, python3.6
Precautions: PyTorch 1.0 is used in python2.7 and python3.6. The default runtime is python2.7.

● The imported model, inference code, and configuration file must comply with the requirements of ModelArts. For details, see Model Package Specifications, Specifications for Compiling the Model Configuration File, and Specifications for Compiling Model Inference Code.
● The trained model package, inference code, and configuration file have been uploaded to the OBS directory.

● Ensure that the OBS directory you use and ModelArts are in the same region.

Procedure

1. Log in to the ModelArts management console, and choose Model Management > Models in the left navigation pane. The Models page is displayed.
2. Click Import in the upper left corner. The Import page is displayed.
3. On the Import page, set related parameters.
a. Enter basic information about the model. For details about the parameters, see Table 4-9.


Table 4-9 Parameters of the model basic information

Name: Model name. The value can contain 1 to 64 visible characters, including Chinese characters. Only letters, Chinese characters, digits, hyphens (-), and underscores (_) are allowed.

Version: Version of the model to be created. For the first import, the default value is 0.0.1.

Label: Model label. A maximum of five model labels are supported.

Description: Brief description of the model.

b. Select the meta model source and set related parameters. Meta Model Source has four options based on the scenario. For details, see Four Methods of Importing a Model. Set Meta Model Source to OBS. For details about the parameters, see Table 4-10.
For the meta model imported from OBS, you need to compile the inference code and configuration file by referring to Model Package Specifications and place the inference code and configuration files in the model folder storing the meta model. If the selected directory does not contain the corresponding inference code and configuration files, the model cannot be imported.

Figure 4-8 Setting Meta Model Source to OBS

Table 4-10 Parameters of the meta model source

Meta Model: Select the model storage path. This path is the training output path specified in the training job.

AI Engine: The corresponding AI engine is automatically associated based on the selected meta model storage path.


Deployment Type: After the model is imported, select the service type for which the model will be deployed. When deploying a service, you can only deploy the service type selected here. For example, if you only select Real-time services here, you can only deploy real-time services after importing the model. ModelArts currently supports the following deployment types: Real-time services, Batch services, and Edge services.

Configuration File: By default, the system associates the configuration file stored in OBS. Enable the function to view, edit, or import the model configuration file from OBS.

Parameter Configuration: Click the button on the right to view the input and output parameters of the model.

Runtime Dependency: Lists the dependencies of the selected model on the environment. For example, if tensorflow is used and the installation method is pip, the version must be 1.8.0 or later.

c. Set the inference specifications and model description.

▪ Min. Inference Specs: If your model requires certain resources to complete inference, you can configure this parameter to set the minimum specifications required for normal inference after the model is deployed as a service. In later versions, the system will allocate resources based on the inference specifications in service deployment. You can also modify the specifications as required during deployment. Note that the specifications configured here are valid only when real-time services are deployed and the dedicated resource pool is used or when edge services are deployed.
▪ Model Description: To help other model developers better understand and use your models, you are advised to provide model descriptions. Click Add Model Description and then set the document name and URL. A maximum of three model descriptions are supported.

Figure 4-9 Selecting the inference specifications and adding model description

d. Check the information and click Next. The model is imported.
In the model list, you can view the imported model and its version. When the model status changes to Normal, the model is successfully imported.


On this page, you can create new versions, quickly deploy models, publish models to the market, and perform other operations.

Follow-Up Procedure

● Model Deployment: On the Models page, click the triangle next to a model name to view all versions of the model. Locate the row that contains the target version, click Deploy in the Operation column, and select the deployment type configured when importing the model from the drop-down list box. On the Deploy page, set parameters by referring to Model Deployment Overview.

4.4 Managing Model Versions

To facilitate source tracing and repeated model tuning, ModelArts provides the model version management function. You can manage models based on versions.

Prerequisites

You have imported a model to ModelArts.

Creating a New Version

On the Model Management > Models page, click Create New Version in the Operation column. The Create New Version page is displayed. Set related parameters by referring to Importing Models and click Next.

Deleting a Version

On the Model Management > Models page, click the triangle on the left of the model name to expand a model version list. In the model version list, click Delete in the Operation column to delete the corresponding version.

A deleted version cannot be recovered. Exercise caution when performing this operation.

4.5 Publishing a Model

Models imported to ModelArts can be published to AI Market or submitted to challenges.

Prerequisites

You have imported a model to ModelArts, and the model has at least one version.

Publishing to AI Market

ModelArts provides the AI Market function for you to share your models with all ModelArts users. You can also use what other users share in AI Market to quickly complete modeling. After training and importing a model, you can publish the model to AI Market for sharing.


1. Log in to the ModelArts management console, and choose Model Management > Models in the left navigation pane. The Models page is displayed.
2. Click the small triangle icon on the left of the model to expand a model version list. Choose Publish > Publish to Market in the Operation column to access AI Market.
NOTE: In the CN North-Beijing1 region, both old and new AI Markets exist, and you need to choose to which AI Market you want to publish a model in the displayed dialog box. You are advised to publish the model to new AI Market. Currently, models cannot be published to old AI Market. In the CN North-Beijing4 region, only new AI Market is supported. After you click Publish to Market, the model is published to new AI Market.
3. After entering AI Market, click Create Model to publish your model to the market and share it with other users. For details about how to use new AI Market, see AI Market.

Submitting to Challenges

The HUAWEI CLOUD AI Contest organizes some contests for developers. You can develop models on ModelArts and submit them to challenges.

1. Log in to the ModelArts management console, and choose Model Management > Models in the left navigation pane. The Models page is displayed.
2. Click the small triangle icon on the left of the model to expand a model version list. Choose Publish > Submit to Challenges in the Operation column to access AI Market.
3. In the following dialog box, set Challenges, confirm the model information, and click OK.

Figure 4-10 Submitting a model to challenges


4.6 Model Compression and Conversion

4.6.1 Compressing and Converting Models

To obtain higher computing power, you can deploy the models created on ModelArts or a local PC on the Ascend chip, ARM, or GPU. In this case, you need to compress or convert the models to the required formats before deploying them.

ModelArts supports model conversion, allowing you to convert a model to a required format before deploying the model on a chip with higher computing power and performance.

Model conversion is applicable to the following scenarios:

● If you use the Caffe (in .caffemodel format) or TensorFlow framework (in frozen_graph or saved_model format) to train a model, you can convert the model to the .om format. The converted model can be deployed and run on Ascend chips.
● If you use the TensorFlow framework to train a model (in frozen_graph or saved_model format), you can convert the model to the .tflite format. The converted model can be deployed and run on ARM.
● If you use the TensorFlow framework to train a model (in frozen_graph or saved_model format), you can convert the model to the TensorRT format. The converted model can be deployed and run on the NVIDIA Tesla P4 GPU. A minimal sketch of exporting a TensorFlow model in saved_model format follows this list.
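For reference, the following is a minimal, illustrative sketch of exporting a TensorFlow 1.x model in saved_model format; the toy graph, tensor names, and export directory are placeholders, and the export code for your own model will differ.

# Minimal illustrative sketch (TensorFlow 1.x): export a toy graph in
# saved_model format (saved_model.pb plus a variables/ subdirectory).
# The graph, tensor names, and export directory are placeholders.
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 4], name='input')
w = tf.Variable(tf.ones([4, 2]), name='weights')
logits = tf.matmul(x, w, name='logits')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    tf.saved_model.simple_save(
        sess,
        './export/saved_model_dir',          # directory must not already exist
        inputs={'input': x},
        outputs={'logits': logits},
    )

The resulting directory contains saved_model.pb and a variables/ subdirectory, which matches the saved_model layout described in Model Input Path Specifications.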

Background

● Only the following types of chips are supported for model conversion: Ascend, ARM, and GPU.
● Only models trained by the Caffe and TensorFlow frameworks can be converted.
● ModelArts provides conversion templates for you to choose. For details about the supported templates, see Conversion Templates.
● The .tflite and TensorRT formats support fewer operators and quantization operators. Therefore, some models may fail to be converted. If the conversion fails, view the log dialog box or check the error logs in the conversion output directory.
● For details about constraints on converting the models run on the Ascend chip, see Constraints and Parameters in the Ascend Developer Community.
● An OBS directory must be specified in compression/conversion tasks. Ensure that the OBS directory you use and ModelArts are in the same region.

Constraints

Models trained using built-in algorithms cannot be converted.


Creating a Model Compression/Conversion Task

1. Log in to the ModelArts management console, and choose Model Management > Compression/Conversion in the left navigation pane. The Compression/Conversion page is displayed.
2. Click Create Task in the upper left corner to create a task.
3. On the Create Task page that is displayed, set the required parameters based on Table 4-11.

Table 4-11 Parameter description

Name: Name of a model conversion task.

Description: Description of a model conversion task.

Conversion Template: ModelArts provides various templates to define model conversion and the parameters required during the conversion. Conversion Templates details the supported model conversion templates. You can select a template from the template list. Alternatively, you can enter a keyword in the search box to search for a template, or select a template based on the chip type, framework type, or model file format.
● Chip type: ModelArts conversion templates support Ascend, ARM, and GPU chips.
● Framework type: The conversion templates generate models in different formats based on different frameworks. The TensorFlow and Caffe frameworks are supported.
● Model file format: The supported model file formats are listed in the drop-down list. Select a format from the drop-down list. Currently, the caffemodel, frozen_graph, and tf_serving formats are supported.

Conversion Input Path: Path to the model to be converted. The path must be an OBS path and comply with the ModelArts specifications. For details about the specifications, see Model Input Path Specifications.

Conversion Output Path: Path to the converted model. The path must comply with the ModelArts specifications. For details about the specifications, see Model Output Path Description.


Advanced Settings: ModelArts allows you to configure advanced settings for different conversion templates, for example, the precision. Different conversion templates support different advanced settings. For details about the parameters supported by each template, see Conversion Templates.

Figure 4-11 Creating a model compression/conversion task

4. After entering the task information, click Next in the lower right corner.
After the task is created, the system automatically switches to the Compression/Conversion page. The created conversion task is displayed on the page and is in the Initializing status. The conversion task takes several minutes to complete. When the task status changes to Successful, the task is completed and the model has been converted.
If the task status changes to Failed, click the task name to go to the task details page, view the log information, adjust task parameters based on the log information, and create another conversion task.
The converted model can be used in the following scenarios:
– You can import the converted model on the HiLens management console and install the model on the HiLens Kits device.
– Go to the OBS path corresponding to the conversion output path, download the model (in .om format), and deploy it on your device.


Deleting a Model Compression/Conversion Task

You can delete unnecessary conversion tasks. However, tasks in the Running or Initializing status cannot be deleted.

Deleted tasks cannot be recovered. Exercise caution when performing this operation.

● Deleting a single task: On the Compression/Conversion page, click Delete in the Operation column of the target task.
● Deleting a batch of tasks: On the Compression/Conversion page, select multiple tasks to be deleted and click Delete in the upper left corner.

4.6.2 Model Input Path Specifications

The converted models can run on different chips, and the requirements for the model input path vary according to the chip. ModelArts has different model input path requirements for the Ascend chip and for ARM or GPU.

Ascend Chip
The requirements for converting models to run on the Ascend chip are as follows:

● For Caffe-based models, the input path must comply with the following specifications during model conversion:
|
|---xxxx.caffemodel       (Mandatory) Model parameter file. Only one model parameter file can exist in the input path.
|---xxxx.prototxt         (Mandatory) Model network file. Only one model network file can exist in the input path.
|---insert_op_conf.cfg    (Optional) Insertion operator configuration file. Only one insertion operator configuration file can exist in the input path.
|---plugin                (Optional) Custom operator directory. The input directory can contain only one plugin folder. Only custom operators developed based on Tensor Engine (TE) are supported.

● For TensorFlow-based models (in frozen_graph or saved_model format), the input path must comply with the following specifications during model conversion:
frozen_graph format
|
|---xxxx.pb               (Mandatory) Model network file. Only one model network file can exist in the input path. The model must be in frozen_graph or saved_model format.
|---insert_op_conf.cfg    (Optional) Insertion operator configuration file. Only one insertion operator configuration file can exist in the input path.
|---plugin                (Optional) Custom operator directory. The input directory can contain only one plugin folder. Only custom operators developed based on Tensor Engine (TE) are supported.

saved_model format
|
|---saved_model.pb        (Mandatory) Model network file. Only one model network file can exist in the input path. The model must be in frozen_graph or saved_model format.
|---variables             (Mandatory) Fixed subdirectory name, including the model weight deviation.
     |---variables.index                     (Mandatory)
     |---variables.data-00000-of-00001       (Mandatory)
|---insert_op_conf.cfg    (Optional) Insertion operator configuration file. Only one insertion operator configuration file can exist in the input path.
|---plugin                (Optional) Custom operator directory. The input directory can contain only one plugin folder. Only custom operators developed based on Tensor Engine (TE) are supported.


ARM or GPU
Only TensorFlow-based models can be converted to run on the ARM or GPU, that is, the models in frozen_graph and saved_model formats.

The requirements for converting the models in frozen_graph format are as follows:
|
|---model                 Directory for storing the model. The directory must be named model. Only one model directory exists and only one model-related file can be stored in the directory.
     |---xxx.pb           Model file. The file must be in frozen_graph format of TensorFlow.
|---calibration_data      Directory for storing the calibration dataset. The directory must be named calibration_data. The directory is required for 8-bit conversion but not required for 32-bit conversion. Only one calibration_data directory exists in the input path.
     |---xx.npy           Calibration dataset. The dataset can contain multiple .npy files. Ensure that the .npy files are the data directly input into the model after preprocessing, and the input tensors must be the same as those of the model.

The requirements for converting the models in saved_model format are as follows:
|
|---model                 Directory for storing the model. The directory must be named model. Only one model directory exists and only one model-related file can be stored in the directory.
     |---saved_model.pb   Model file. The file must be in saved_model format of TensorFlow.
     |---variables        Directory for storing variables
          |---variables.data-******-of-*****   Data required for the saved_model file
          |---variables.index                  Index required by the saved_model file
|---calibration_data      Directory for storing the calibration dataset. The directory must be named calibration_data. The directory is required for 8-bit conversion but not required for 32-bit conversion. Only one calibration_data directory exists in the input path.
     |---xx.npy           Calibration dataset. The dataset can contain multiple .npy files. Ensure that the .npy files are the data directly input into the model after preprocessing, and the input tensors must be the same as those of the model.
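The guide does not prescribe how the calibration .npy files are produced. The following is a minimal Python sketch, assuming a hypothetical model whose input tensor is a 1 x 224 x 224 x 3 float32 batch; replace the random stand-in data with the output of the preprocessing your model actually uses, so that each saved array matches the model input exactly, as required above.

import os
import numpy as np

def save_calibration_sample(preprocessed, index, out_dir="calibration_data"):
    # preprocessed: array with the same shape and dtype as the model input tensor
    os.makedirs(out_dir, exist_ok=True)
    np.save(os.path.join(out_dir, "sample_%03d.npy" % index), preprocessed.astype(np.float32))

# Random data stands in for real preprocessed images in this sketch.
for i in range(8):
    sample = np.random.rand(1, 224, 224, 3).astype(np.float32)
    save_calibration_sample(sample, i)

Upload the resulting calibration_data directory to the OBS input path alongside the model directory before starting an 8-bit conversion task.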

4.6.3 Model Output Path Description
After a model conversion task is completed, ModelArts exports the converted model to a specified OBS path. The path varies according to the conversion task and chip, including the Ascend chip and the ARM or GPU.

Ascend Chip
The following describes the output path of the model run on the Ascend chip after conversion:

● For Caffe-based models, the output path must comply with the following specifications during model conversion:
|
|---xxxx.om          Converted model to run on the Ascend chip. The model file name extension is .om.
|---job_log.txt      Conversion log file

● For TensorFlow-based models, the output path must comply with the following specifications during model conversion:
|
|---xxxx.om          Converted model to run on the Ascend chip. The model file name extension is .om.
|---job_log.txt      Conversion log file

ARM or GPU
The following describes the output path of the model run on the ARM or GPU after conversion:


The format for the model run on the GPU is as follows:
|
|---model
     |---xxx.pb       Converted model to run on the GPU. The model file name extension is .pb.
|---job_log.txt       Conversion log file

The format for the model run on the ARM is as follows:
|
|---model
     |---xxx.tflite   Converted model to run on the ARM. The model file name extension is .tflite.
     |---config.json  Parameters required for using the .tflite file after 8-bit conversion
|---job_log.txt       Conversion log file

4.6.4 Conversion Templates

Table 4-12 Model conversion templates provided by ModelArts

Template Description Advanced Settings

Caffe to Ascend
Description: Convert the model trained by the Caffe framework. The converted model can run on the Ascend chip.
Advanced Settings: None

Tensorflow frozen_graph to TFLite
Description: Convert the model trained by the TensorFlow framework and saved in frozen_graph format. The converted model can run on the ARM.
Advanced Settings:
● inputs: Enter the model input tensors in a character string, in the format of input1:input2.
● outputs: Enter the model output tensors in a character string, in the format of output1:output2.
● precision: Select 8bit or 32bit. 32bit indicates that the model is directly converted, and 8bit indicates that the model is quantized.
● batch_size: Enter a value to specify the batch size. The value must be an integer.


Tensorflow saved_model to TFLite
Description: Convert the model trained by the TensorFlow framework and saved in saved_model format. The converted model can run on the ARM.
Advanced Settings:
● signature_def_key: Enter the tensor signatures in a character string. By default, the first signature is selected.
● input_saved_model_tags: Enter the model output labels in a character string. By default, the first label is selected.
● precision: Select 8bit or 32bit. 32bit indicates that the model is directly converted, and 8bit indicates that the model is quantized.
● batch_size: Enter a value to specify the batch size. The value must be an integer.

Tensorflow frozen_graph to TensorRT
Description: Convert the model trained by the TensorFlow framework and saved in frozen_graph format. The converted model can run on the GPU.
Advanced Settings:
● inputs: Enter the model input tensors in a character string, in the format of input1:input2.
● outputs: Enter the model output tensors in a character string, in the format of output1:output2.
● precision: Select 8bit or 32bit. 32bit indicates that the model is directly converted, and 8bit indicates that the model is quantized.
● batch_size: Enter a value to specify the batch size. The value must be an integer.

Tensorflow saved_model to TensorRT
Description: Convert the model trained by the TensorFlow framework and saved in saved_model format. The converted model can run on the GPU.
Advanced Settings:
● signature_def_key: Enter the tensor signatures in a character string. By default, the first signature is selected.
● input_saved_model_tags: Enter the model output labels in a character string. By default, the first label is selected.
● precision: Select 8bit or 32bit. 32bit indicates that the model is directly converted, and 8bit indicates that the model is quantized.
● batch_size: Enter a value to specify the batch size. The value must be an integer.


Tensorflow frozen_graph to Ascend
Description: Convert the model trained by the TensorFlow framework and saved in frozen_graph format. The converted model can run on the Ascend chip.
Advanced Settings:
● input_shape: Enter the shape of the input data of the model. The input data format is NHWC, for example, input_name:1,224,224,3. This parameter is mandatory. input_name must be the node name in the network model before model conversion.


TF-SavedModel-To-Ascend
Description: Convert the model trained by the TensorFlow framework and saved in saved_model format. The converted model can run on the Ascend chip. The custom operators (TE operators) developed based on TE can be used for conversion.
Advanced Settings:
● input_name: Enter the shape of the input data of the model, for example, input_name1:n1,c1,h1,w1;input_name2:n2,c2,h2,w2. input_name must be the node name in the network model before model conversion. This parameter is mandatory when the model has dynamic shape input. During the conversion, the system parses the input model to obtain the input tensor and prints it in the log. If you do not know the input tensor of the used model, refer to the parsing result in the log.
● input_format: The default data input format is NHWC. If the real-world format is NCHW, you need to specify the format as NCHW by setting this parameter.
● out_nodes: Specifies the output nodes, for example, node_name1:0;node_name1:1;node_name2:0. node_name must be the node name in the network model before model conversion. The digit after each colon (:) indicates the sequence number of the output. For example, node_name1:0 indicates the 0th output of node_name1. During the conversion, the system parses the input model to obtain the output nodes and prints them in the log. If you do not know the output nodes of the used model, refer to the parsing result in the log.

● net_format: Specifies the preferred data format for network operators. Possible values are ND (N cannot be more than 4) and 5D. This parameter takes effect only if the input data of operators on the network supports both the ND and 5D formats. ND indicates that operators in the model are converted into the NCHW format. 5D indicates that operators in the model are converted into the Huawei-developed 5D format. 5D is the default value.

● fp16_high_prec: Specifies whether to generate a high-precision FP16 Da Vinci model. 0 is the default value, indicating that a common FP16 Da Vinci model with better inference performance is generated. The value 1 indicates that a high-precision FP16 Da Vinci model with better inference precision is generated.
● output_type: FP32 is the default value and is recommended for classification and detection networks. For image super-resolution networks, UINT8 is recommended for better inference performance.


5 Model Deployment

5.1 Model Deployment Overview
After a training job is completed and a model is generated, you can deploy the model on the Service Deployment page. You can also deploy the model imported from OBS. ModelArts supports the following deployment types:
● Real-Time Services
Deploy a model as a web service to provide a real-time test UI and monitoring capabilities.
● Batch Services
A batch service can perform inference on batch data. After data processing is completed, the batch service automatically stops.
● Edge Services
Deploy a model as a web service on an edge node through Intelligent EdgeFabric (IEF).

5.2 Real-Time Services

5.2.1 Deploying a Model as a Real-Time Service
After a model is prepared, you can deploy the model as a real-time service and predict and call the service.

A maximum of 20 real-time services can be created.

Prerequisites
● Data has been prepared. Specifically, you have created a model in the Normal status in ModelArts.
● Ensure that the account is not in arrears. Resources are consumed when services are running.


Procedure
1. Log in to the ModelArts management console. In the left navigation pane, choose Service Deployment > Real-Time Services. By default, the system switches to the Real-Time Services page.
2. In the real-time service list, click Deploy in the upper left corner. The Deploy page is displayed.
3. On the Deploy page, set the required parameters, and then click Next.
a. Enter basic information about model deployment. For details about the parameters, see Table 5-1.

Table 5-1 Basic parameters of model deployment

Parameter Description

Billing Mode Currently, only pay-per-use billing is supported.

Name Name of the real-time service. Set this parameter as prompted.

Auto Stop After this parameter is enabled and the auto stop time is set, a service automatically stops at the specified time. If this parameter is disabled, a real-time service keeps running and billing. The function can help you avoid unnecessary billing. The auto stop function is enabled by default, and the default value is 1 hour later. Currently, the options are 1 hour later, 2 hours later, 4 hours later, 6 hours later, and Custom. If you select Custom, you can enter any integer within 1 to 24 hours in the text box on the right.

Description Brief description of the real-time service.

Figure 5-1 Basic information about deploying a model as a real-time service


b. Enter key information including the resource pool and model configurations. For details, see Table 5-2.

Table 5-2 Parameter description

Parameter / Sub-Parameter / Description

Resource Pool / Public Resource Pools: Instances in the public resource pool can be of the CPU or GPU type. Pricing standards for resource pools with different instance flavors are different. For details, see Product Pricing Details. Currently, the public resource pool only supports the pay-per-use billing mode.

Resource Pool / Dedicated Resource Pools: For details about how to create a dedicated resource pool, see Buying a Dedicated Resource Pool. You can select a specification from the resource pool specifications.

Model and Configuration / Models: The system automatically associates with the list of available models. Select a model in the Normal state and its version.

Model and Configuration / Traffic Ratio (%): Set the traffic proportion of the node. If you deploy only one version of a model, set this parameter to 100%. If you select multiple versions for gated launch, ensure that the sum of the traffic ratios of multiple versions is 100%.

Model and Configuration / Instance Flavor: When a public resource pool is selected, CPU: 2 vCPUs|8 GiB and CPU: 2 vCPUs|8 GiB GPU: 1 x P4 are available.
NOTE
● If an ExeML model and version are selected, the ExeML specifications (CPU) and ExeML specifications (GPU) flavors are available.
● To use the CPU: 2 vCPUs|8 GiB GPU: 1 x P4 flavor, submit a service ticket to apply for it.

Model and Configuration / Instances: Set the number of instances for the current model version. If you set Instances to 1, the standalone computing mode is used. If you set Instances to a value greater than 1, the distributed computing mode is used. Select a computing mode based on the actual requirements.

Model and Configuration / Environment Variable: Set environment variables and inject them to the container instance.

Model and Configuration / Add Model and Configuration: ModelArts supports multiple model versions and flexible traffic policies. You can use gated launch to smoothly upgrade the model version.
NOTE
If the selected model has only one version, the system does not display Add Model Version and Configuration.

Figure 5-2 Setting model information

c. After setting the parameters, click Next.

4. On the displayed Specifications page, confirm the information and click Next. Generally, service deployment jobs run for a period of time, which may be several minutes or tens of minutes depending on the amount of your selected data and resources.

After a real-time service is deployed, it is started immediately. During the running, you will be charged based on your selected resources.

You can go to the real-time service list to view the basic information about the real-time service. In the real-time service list, after the status of the newly deployed service changes from Deploying to Running, the service is deployed successfully.

5.2.2 Viewing Service Details
After a model is deployed as a real-time service, you can access the real-time service page to view the service details.

1. Log in to the ModelArts management console and choose Service Deployment > Real-Time Services.
2. On the Real-Time Services page, click the name of the target service. The service details page is displayed.
You can view the service name, status, ID, source, ratio of failed calls to total calls, network configuration, and description.
You can click in the Description area to edit the description.


Figure 5-3 Service details page

3. You can switch between tabs on the details page of a real-time service to view more details. For details, see Table 5-3.

Table 5-3 Service details

Parameter Description

Usage Guides: Displays the API address, model information, input parameters, and output parameters. You can click to copy the API address to call the service.

Prediction: Performs a prediction test on the real-time service. For details, see Testing the Service.

Configuration Updates: Displays Existing Configuration and Historical Updates.
● Existing Configuration: includes the model name, version, status, traffic ratio, instance flavor, and instance count.
● Historical Updates: displays historical model information.

Monitoring: Displays Resource Usage and Model Calls.
● Resource Usage: includes the used and available CPU, memory, and GPU resources.
● Model Calls: indicates the number of model calls. The statistics collection starts after the model status changes to Ready.

Event: Displays key operations during service use, such as the service deployment progress, detailed causes of deployment exceptions, and time points when a service is started, stopped, or modified.

Logs: Displays the log information about each model in the service. You can view logs generated in the latest 5 minutes, latest 30 minutes, latest 1 hour, and a user-defined time segment.
● You can select the start time and end time when defining the time segment.

Sharing: Displays the sharing information about the service, including the users who have subscribed to the service and the number of service calls.

Traceback Diagrams: Displays the traceback diagrams between the service and data, training, and models. On the Traceback Diagrams page, you can view the traceback diagrams between the service and models. In the traceback diagram area, select an element. Details about the element are displayed in the right pane.

5.2.3 Testing the Service
After a model is deployed as a real-time service, you can debug code or add images for testing on the Prediction tab page. You can test the service using either of the following methods:
1. Code Prediction: If the current service is of the numerical prediction type, enter the prediction code to perform a prediction test.
2. Image Prediction: If the current service is of the image recognition prediction type, add an image to perform a prediction test.

Code Prediction
1. Log in to the ModelArts management console and choose Service Deployment > Real-Time Services.
2. On the Real-Time Services page, click the name of the target service. The service details page is displayed. On the Prediction tab page, enter the prediction code and click Predict to perform prediction. See Figure 5-4. attr_7 indicates the target column, and predictioncol indicates the prediction result of the target column attr_7.

Figure 5-4 Prediction code

The value of attr_7 can be set to any value or left blank, which does not affect the prediction result.

Image Prediction
1. Log in to the ModelArts management console and choose Service Deployment > Real-Time Services.
2. On the Real-Time Services page, click the name of the target service. The service details page is displayed. On the Prediction tab page, click the upload button and select an image for the prediction test. After the image is uploaded successfully, click Prediction to perform a prediction test. See Figure 5-5. The label yunbao, the position coordinates, and the confidence score are displayed.

Figure 5-5 Image prediction

5.2.4 Accessing a Real-Time Service
If a real-time service is in the Running status, the service is deployed successfully. You can use either of the following methods to send an inference request to the real-time service:
● Method 1: Use GUI-based Software for Inference (Postman)
● Method 2: Run the cURL Command to Send an Inference Request

Method 1: Use GUI-based Software for Inference (Postman)
1. Download Postman and install it, or directly add the Postman extension to Google Chrome. (Alternatively, use other software that can send POST requests.)
2. Open Postman. Figure 5-6 shows the Postman interface.

Figure 5-6 Postman interface


3. Set parameters on Postman. The following uses image classification as an example.
– Select a POST task, and copy the API URL to the POST edit box. View the API URL of the real-time service on the Usage Guides tab page of the real-time service details page. On the Headers tab page, set KEY to X-Auth-Token and VALUE to the token obtained in Obtaining User Token. See Figure 5-7.

You can also use AK/SK to encrypt the service call request. For details, see AK/SK Authentication in Authentication in the ModelArts API Reference.

Figure 5-7 Parameter settings

– On the Body tab page, there are two types of input parameters: file input and text input.
▪ File input
Select form-data. Set KEY to the input parameter of the model, for example, images. Set VALUE to an image to be inferred (currently, only one image can be inferred). See Figure 5-8.

Figure 5-8 Setting parameters on the Body tab page

▪ Text input
Select raw and then select JSON(application/json). Enter the request body in the text box below. An example request body is as follows:
{
  "meta": {
    "uuid": "10eb0091-887f-4839-9929-cbc884f1e20e"
  },
  "data": {
    "req_data": [
      {
        "sepal_length": 3,
        "sepal_width": 1,
        "petal_length": 2.2,
        "petal_width": 4
      }
    ]
  }
}

meta can carry uuid. When calling an API, pass a universally unique identifier (UUID). When the inference result is returned, uuid is returned to trace the request. If you do not need this function, leave meta blank. data contains the req_data array. You can pass one or more pieces of request data. The parameters of each piece of data are determined by the model, such as sepal_length and sepal_width in this example.

4. After setting the parameters, click Send to send the request. The result is displayed in the response.
– Inference result using file input: Figure 5-9 shows an example. The field values in the return result may vary with the model.
– Inference result using text input: Figure 5-10 shows an example. The request body contains meta and data. If the request contains uuid, uuid will be returned in the response. Otherwise, uuid is left blank. data contains a resp_data array, which returns the inference result of one or more pieces of input data. The parameters of each result are determined by the model, for example, sepal_length and predictresult in this example.

Figure 5-9 File inference result


Figure 5-10 Text inference result

Method 2: Run the cURL Command to Send an Inference Request

The command format for sending inference requests varies with file input and text input.

1. File input
curl -F 'images=@Image path' -H 'X-Auth-Token:Token value' -X POST Real-time service URL
– -F indicates file input. In this example, the parameter name is images, which can be changed as required. The image storage path follows @.
– -H indicates the header of the POST command. X-Auth-Token is the KEY value on the Headers page. Token value indicates the token obtained in Token Authentication in Obtaining Request Authentication Information.
– POST is followed by the API URL of the real-time service.
The following is an example of the cURL command for inference with file input:
curl -F 'images=@/home/data/test.png' -H 'X-Auth-Token:MIISkAY***80T9wHQ==' -X POST https://modelarts-infers-1.cn-north-1.myhuaweicloud.com/v1/infers/eb3e0c54-3dfa-4750-af0c-95c45e5d3e83

2. Text input
curl -d '{"data":{"req_data":[{"sepal_length":3,"sepal_width":1,"petal_length":2.2,"petal_width":4}]}}' -H 'X-Auth-Token:MIISkAY***80T9wHQ==' -H 'Content-type: application/json' -X POST https://modelarts-infers-1.cn-north-1.myhuaweicloud.com/v1/infers/eb3e0c54-3dfa-4750-af0c-95c45e5d3e83
-d indicates the text input of the request body.
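For reference, the same two requests can also be sent from a script. The following is a minimal Python sketch using the requests library; it is only an illustration, and the service URL, token, file path, and field names are placeholders based on the cURL examples above.

import requests

# Placeholders: replace with your real-time service API URL and a valid token.
url = "https://modelarts-infers-1.cn-north-1.myhuaweicloud.com/v1/infers/<service-id>"
headers = {"X-Auth-Token": "<token value>"}

# File input (equivalent to the curl -F example above)
with open("/home/data/test.png", "rb") as f:
    resp = requests.post(url, headers=headers, files={"images": f})
print(resp.status_code, resp.text)

# Text input (equivalent to the curl -d example above)
body = {"data": {"req_data": [{"sepal_length": 3, "sepal_width": 1,
                               "petal_length": 2.2, "petal_width": 4}]}}
resp = requests.post(url, headers=headers, json=body)
print(resp.status_code, resp.text)

As with the cURL examples, the response body for text input contains the resp_data array described in Method 1.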


5.2.5 Publishing to AI Market
ModelArts provides the AI Market function to share personal models, APIs, and datasets with all ModelArts users. You can also obtain the shared content from AI Market to quickly complete modeling. After deploying the model, you can publish the real-time service APIs to AI Market for knowledge sharing. On the AI Market page, you can view the APIs published by other users and yourself.

Prerequisites
You have imported a model to ModelArts, and the model has at least one version.

Procedure
1. Log in to the ModelArts management console and choose Service Deployment > Real-Time Services.
2. In the Operation column of the target real-time service, choose More > Publish to Market.
3. In the dialog box that is displayed, set the required parameters. For details, see Table 5-4.

Table 5-4 Parameter description

Parameter Description

Publisher: Name of the model publisher displayed in the market. The name cannot be changed after a model is published.

Name: Model name displayed in the market.

Description: Brief description of the API to be published. You are advised to describe the API in terms of application scenarios, usage methods, and training dataset.

Keywords: After model keywords are set, they can be displayed in the market for classification and quick query of the API. You are advised to select a maximum of three most appropriate keywords from each attribute menu to describe your API. If there is no proper keyword, leave this parameter blank. You can select keywords in terms of the industry, data, scenario, topic, model, and engine.

Cover: A cover helps other users to view the usage of the model. You can select a cover image from OBS or upload it from the local PC. If you have published the model before, you can use the previous cover image. Supported image formats are JPG, PNG, GIF, and BMP. You are advised to select a GIF image as the cover. The recommended aspect ratio is 5:3.


Publish To: You can publish a model to AI Market or individual users.
● Public: Publish a model to all users. Publishing to the public needs to be manually reviewed. You can view the review progress in the publishing dialog box.
● User: Publish a model to specified users.
NOTE
● You can obtain the user IDs from My Credentials.
● Separate multiple user IDs with commas (,). Special characters and spaces are not allowed.

4. Click OK. The API is published.

You can view the content you have published on the My Publishes page in AI Market on the ModelArts management console. For details, see My Publishes.

5.3 Batch Services

5.3.1 Deploying a Model as a Batch Service
After a model is prepared, you can deploy it as a batch service. The Service Deployment > Batch Services page lists all batch services. You can enter a service name in the search box in the upper right corner and click the search icon to query the service.

Prerequisites
● Data has been prepared. Specifically, you have created a model in the Normal status in ModelArts.
● Data to be batch processed is ready and has been uploaded to an OBS directory.
● At least one empty folder has been created on OBS for storing the training output.

Background
● Currently, batch services are free for a limited time. Running batch services are not billed.
● A maximum of 1,000 batch services can be created.

Procedure
1. Log in to the ModelArts management console. In the left navigation pane, choose Service Deployment > Batch Services. By default, the system switches to the Batch Services page.
2. In the batch service list, click Deploy in the upper left corner. The Deploy page is displayed.


3. On the Deploy page, set the required parameters, and then click Next.
a. Set the basic information, including Name and Description. The name is generated by default, for example, service-bc0d. You can specify Name and Description according to actual requirements.
b. Set other parameters, including the resource pool and model configurations. For details, see Table 5-5.

Table 5-5 Parameter description

Parameter Description

Model and Version: Select the model and version that are in the Normal status.

Input Path: Select the OBS directory where the data to be processed has been uploaded. Select a folder or a .manifest file. For details about the specifications of the .manifest file, see Manifest File Specifications.

Request Path: API URI of a batch service.

Mapping Relationship: Enter the field index corresponding to each parameter in the CSV file. The index starts from 0. The mapping is automatically generated based on the model file. If the model file contains any of the file (images) and data=json information, the mapping relationship details are displayed.

Output Path: Select the path for saving the batch prediction result. You can select the empty folder that you create.

Instance Flavor: CPU: 2 vCPUs|8 GiB and CPU: 2 vCPUs|8 GiB GPU: 1 x P4.
NOTE
● If an ExeML model and version are selected, the ExeML specifications (CPU) and ExeML specifications (GPU) flavors are available.
● To use the CPU: 2 vCPUs|8 GiB GPU: 1 x P4 flavor, submit a service ticket to apply for it.

Instances: Set the number of instances for the current model version. If you set Instances to 1, the standalone computing mode is used. If you set Instances to a value greater than 1, the distributed computing mode is used. Select a computing mode based on the actual requirements.

Environment Variable: Set environment variables and inject them to the container instance.

4. After setting the parameters, click Next. The batch service is deployed. Generally, service deployment jobs run for a period of time, which may be several minutes or tens of minutes depending on the amount of your selected data and resources.

After a batch service is deployed, it is started immediately. During the running, you will be charged based on your selected resources.

You can go to the batch service list to view the basic information about the batch service. In the batch service list, after the status of the newly deployed service changes from Deploying to Running, the service is deployed successfully.

Manifest File Specifications
Batch services of the inference platform support the manifest file. The manifest file describes the input and output of data.

Example input manifest file
● File name: test.manifest
● File content:

{"source": "s3://obs-data-bucket/test/data/1.jpg"}{"source": "https://infers-data.obs.cn-north-1.myhwclouds.com:443/xgboosterdata/data.csv?AccessKeyId=2Q0V0TQ461N26DDL18RB&Expires=1550611914&Signature=wZBttZj5QZrReDhz1uDzwve8GpY%3D&x-obs-security-token=gQpzb3V0aGNoaW5hixvY8V9a1SnsxmGoHYmB1SArYMyqnQT-ZaMSxHvl68kKLAy5feYvLDMNZWxzhBZ6Q-3HcoZMh9gISwQOVBwm4ZytB_m8sg1fL6isU7T3CnoL9jmvDGgT9VBC7dC1EyfSJrUcqfB_N0ykCsfrA1Tt_IQYZFDu_HyqVk-GunUcTVdDfWlCV3TrYcpmznZjliAnYUO89kAwCYGeRZsCsC0ePu4PHMsBvYV9gWmN9AUZIDn1sfRL4voBpwQnp6tnAgHW49y5a6hP2hCAoQ-95SpUriJ434QlymoeKfTHVMKOeZxZea-JxOvevOCGI5CcGehEJaz48sgH81UiHzl21zocNB_hpPfus2jY6KPglEJxMv6Kwmro-ZBXWuSJUDOnSYXI-3ciYjg9-h10b8W3sW1mOTFCWNGoWsd74it7l_5-7UUhoIeyPByO_REwkur2FOJsuMpGlRaPyglZxXm_jfdLFXobYtzZhbul4yWXga6oxTOkfcwykTOYH0NPoPRt5MYGYweOXXxFs3d5w2rd0y7p0QYhyTzIkk5CIz7FlWNapFISL7zdhsl8RfchTqESq94KgkeqatSF_iIvnYMW2r8P8x2k_eb6NJ7U_q5ztMbO9oWEcfr0D2f7n7Bl_nb2HIB_H9tjzKvqwngaimYhBbMRPfibvttW86GiwVP8vrC27FOn39Be9z2hSfJ_8pHej0yMlyNqZ481FQ5vWT_vFV3JHM-7I1ZB0_hIdaHfItm-J69cTfHSEOzt7DGaMIES1o7U3w%3D%3D"}

● File requirements:
a. The file name extension must be .manifest.
b. The file content is in JSON format. Each row describes a piece of input data, which must be accurate to a file instead of a folder (see the generation sketch after this list).
c. A source field must be defined for the JSON content. The field value is the OBS URL of the file in any of the following formats:
i. s3://{{Bucket name}}/{{Object name}}: applicable to accessing your own OBS data.
ii. Shared link generated by OBS, including signature information. It is applicable to accessing OBS data of other users.
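The guide does not show how an input manifest is typically produced. The following is a minimal Python sketch, assuming you already have a list of OBS object URLs for the data to be processed; the bucket and object names below are placeholders.

import json

# Placeholder OBS objects to be inferred; replace with your own bucket and object names.
sources = [
    "s3://obs-data-bucket/test/data/1.jpg",
    "s3://obs-data-bucket/test/data/2.jpg",
]

# Each line of the .manifest file is one JSON record with a "source" field.
with open("test.manifest", "w") as f:
    for src in sources:
        f.write(json.dumps({"source": src}) + "\n")

Upload the generated test.manifest file to OBS and select it as the input path when deploying the batch service.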

Example output manifest file

If you use an input manifest file, the output directory will contain an output manifest file.
● Assume that the output path is //test-bucket/test/. The result is stored in the following path:
OBS bucket/directory name
├── test-bucket
│   ├── test
│   │   ├── infer-result-0.manifest
│   │   ├── infer-result
│   │   │   ├── 1.jpg_result.txt
│   │   │   ├── 2.jpg_result.txt

● Content of the infer-result-0.manifest file:{"source": "s3://obs-data-bucket/test/data/1.jpg", "inference-loc": "s3://test-bucket/test/infer-result/1.jpg_result.txt"}{"source ": "https://infers-data.obs.cn-north-1.myhwclouds.com:443/xgboosterdata/2.jpg?AccessKeyId=2Q0V0TQ461N26DDL18RB&Expires=1550611914&Signature=wZBttZj5QZrReDhz1uDzwve8GpY%3D&x-obs-security-token=gQpzb3V0aGNoaW5hixvY8V9a1SnsxmGoHYmB1SArYMyqnQT-ZaMSxHvl68kKLAy5feYvLDMNZWxzhBZ6Q-3HcoZMh9gISwQOVBwm4ZytB_m8sg1fL6isU7T3CnoL9jmvDGgT9VBC7dC1EyfSJrUcqfB_N0ykCsfrA1Tt_IQYZFDu_HyqVk-GunUcTVdDfWlCV3TrYcpmznZjliAnYUO89kAwCYGeRZsCsC0ePu4PHMsBvYV9gWmN9AUZIDn1sfRL4voBpwQnp6tnAgHW49y5a6hP2hCAoQ-95SpUriJ434QlymoeKfTHVMKOeZxZea-JxOvevOCGI5CcGehEJaz48sgH81UiHzl21zocNB_hpPfus2jY6KPglEJxMv6Kwmro-ZBXWuSJUDOnSYXI-3ciYjg9-h10b8W3sW1mOTFCWNGoWsd74it7l_5-7UUhoIeyPByO_REwkur2FOJsuMpGlRaPyglZxXm_jfdLFXobYtzZhbul4yWXga6oxTOkfcwykTOYH0NPoPRt5MYGYweOXXxFs3d5w2rd0y7p0QYhyTzIkk5CIz7FlWNapFISL7zdhsl8RfchTqESq94KgkeqatSF_iIvnYMW2r8P8x2k_eb6NJ7U_q5ztMbO9oWEcfr0D2f7n7Bl_nb2HIB_H9tjzKvqwngaimYhBbMRPfibvttW86GiwVP8vrC27FOn39Be9z2hSfJ_8pHej0yMlyNqZ481FQ5vWT_vFV3JHM-7I1ZB0_hIdaHfItm-J69cTfHSEOzt7DGaMIES1o7U3w%3D%3D", "inference-loc": "s3://test-bucket/test/infer-result/2.jpg_result.txt"}

● File format:
a. The file name is infer-result-{{index}}.manifest, where index is the instance ID. Each running instance of a batch service generates a manifest file.
b. The infer-result directory is created in the manifest directory to store the result.
c. The file content is in JSON format. Each row describes the output result of a piece of input data.
d. The content contains two fields: source and inference-loc (see the parsing sketch after this list).
i. source: input data description, which is the same as that of the input manifest file.
ii. inference-loc: output result path in the format of s3://{{Bucket name}}/{{Object name}}.
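The following is a minimal Python sketch for reading such an output manifest after downloading it from OBS; the file name follows the format described above, and this is only an illustration.

import json

# Read the downloaded output manifest and list where each result file was written.
with open("infer-result-0.manifest") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        print(record["source"], "->", record["inference-loc"])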

5.3.2 Viewing the Batch Service Prediction Result
When deploying a batch service, you can select the location of the output data directory. You can view the running result of a batch service that is in the Running completed status.

1. Log in to the ModelArts management console and choose Service Deployment > Batch Services.
2. Click the name of the target service in the Running completed status. The service details page is displayed.
– You can view the service name, status, ID, input path, output path, network configuration, and description.
– You can click in the Description area to edit the description.


Figure 5-11 Service details page

3. Click View Details next to Output Path to obtain the batch service prediction result.
– If images are entered, a result file is generated for each image in the Image name_result.txt format, for example, IMG_20180919_115016.jpg_result.txt.
– If audio files are entered, a result file is generated for each audio file in the Audio file name_result.txt format, for example, 1-36929-A-47.wav_result.txt.
– If table data is entered, the result file is generated in the Table name_result.txt format, for example, train.csv_result.txt.

5.4 Edge Services

5.4.1 Deploying an Edge Service
After the model is prepared, you can deploy it as an edge service. The Service Deployment > Edge Services page lists all edge services. You can enter a service name in the search box in the upper right corner and click the search icon to query the service. Edge services depend on Intelligent EdgeFabric (IEF) on HUAWEI CLOUD. Before deploying an edge service, create an edge node on IEF.

Prerequisites
● Data has been prepared. Specifically, you have created a model in the Normal status in ModelArts.
● An edge node has been created on IEF. If you have not created any edge node, create an edge node first. For details, see Creating an Edge Node.
● Ensure that the account is not in arrears. Resources are consumed when services are running.

Background
● Currently, edge services are free for a limited time. Running edge services are not billed.
● A maximum of 1,000 edge services can be created.

Procedure
1. Log in to the ModelArts management console. In the left navigation pane, choose Service Deployment > Edge Services. By default, the system switches to the Edge Services page.


2. In the edge service list, click Deploy in the upper left corner. The Deploy page is displayed.
3. On the Deploy page, set the required parameters, and then click Next.
a. Set the basic information, including Name and Description. The name is generated by default, for example, service-bc0d. You can specify Name and Description according to actual requirements.
b. Set other parameters, including the resource pool and model configurations. For details, see Table 5-6.

Table 5-6 Parameter description

Parameter Description

Model and Configuration: Select the model and version that are in the Normal status.

Instance Flavor: The following specifications are supported:
● CPU: 2 vCPUs | 8 GiB
● CPU: 2 vCPUs | 8 GiB GPU: 1 x P4
● Custom: If you select Custom, set the following parameters as required: CPU, Memory, GPU, and Ascend. Either GPU or Ascend can be set.
NOTE
● To use the CPU: 2 vCPUs | 8 GiB GPU: 1 x P4 flavor, submit a service ticket to apply for it.

Environment Variable: Set environment variables and inject them to the container instance.

Edge Node: Edge nodes are your edge computing devices used to run edge applications, process your data, and collaborate with cloud applications securely and conveniently. Click Add. In the Add Node dialog box that is displayed, select a created edge node and click OK.

4. After setting the parameters, click Next. The edge service is deployed. Generally, service deployment jobs run for a period of time, which may be several minutes or tens of minutes depending on the amount of your selected data and resources.
You can go to the edge service list to view the basic information about the edge service. In the edge service list, after the status of the newly deployed service changes from Deploying to Running, the service is deployed successfully.


Figure 5-12 Edge service list

5.4.2 Accessing an Edge Service

Accessing an Edge Service

If the edge service and edge node are in the Running status, the edge service has been successfully deployed on the edge node.

You can use either of the following methods to send an inference request to the edge service deployed on the edge node, from a network that can access the edge node:

● Method 1: Use GUI-based Software for Inference (Postman)
● Method 2: Run the cURL Command to Send an Inference Request

Method 1: Use GUI-based Software for Inference (Postman)
1. Download Postman and install it, or directly add the Postman extension to Google Chrome. (Alternatively, use other software that can send POST requests.)
2. Open Postman. Figure 5-13 shows the Postman interface.

Figure 5-13 Postman software interface

3. Set parameters on Postman. The following uses image classification as an example.
– Select a POST task, and copy the URL of the edge node to the POST edit box. View the URL of the edge node on the Node Info tab page of the edge service details page.


Figure 5-14 POST parameter settings

– On the Body tab page, there are two types of input parameters: file input and text input.
▪ File input
Select form-data. Set KEY to the input parameter of the model, for example, images. Set VALUE to an image to be inferred (currently, only one image can be inferred).

Figure 5-15 Entering Body configuration information

▪ Text input
Select raw and then select JSON(application/json). Enter the request body in the text box below. An example request body is as follows:
{
  "meta": {
    "uuid": "10eb0091-887f-4839-9929-cbc884f1e20e"
  },
  "data": {
    "req_data": [
      {
        "sepal_length": 3,
        "sepal_width": 1,
        "petal_length": 2.2,
        "petal_width": 4
      }
    ]
  }
}
meta can carry uuid. When the inference result is returned, uuid is returned to trace the request. If you do not need this function, leave meta blank. data contains the req_data array. You can pass one or more pieces of request data. The parameters of each piece of data are determined by the model, such as sepal_length and sepal_width in this example.

4. After setting the parameters, click Send to send the request. The result is displayed in the response.
– Figure 5-16 shows an example of the inference result using file input. The field values in the return result may vary with the model.


Figure 5-16 Inference result using file input for the edge service

– Figure 5-17 shows an example of the inference result using text input. The request body contains meta and data. If the request contains uuid, uuid will be returned in the response. Otherwise, uuid is left blank. data contains the req_data array. You can pass one or more pieces of request data. The parameters of each piece of data are determined by the model, such as sepal_length and sepal_width in this example.


Figure 5-17 Inference result using text input for the edge service

Method 2: Run the cURL Command to Send an Inference Request
The format of the command for sending inference requests varies with file input and text input.

1. File input
curl -F 'images=@Image path' -X POST Service address of the edge node
– -F indicates file input. In this example, the parameter name is images, which can be changed as required. The image storage path follows @.
– POST is followed by the URL of the edge node.
The following is an example of the cURL command for inference with file input:
curl -F 'images=@/home/data/cat.jpg' -X POST https://192.168.0.158:1032

Figure 5-18 shows the inference result.

Figure 5-18 Inference result using the cURL command with file input

2. Text input
curl -d '{
"meta": {
"uuid": "10eb0091-887f-4839-9929-cbc884f1e20e"
},
"data": {
"req_data": [{
"sepal_length": 3,
"sepal_width": 1,
"petal_length": 2.2,
"petal_width": 4
}]
}
}' -X POST <Service address of the edge node>
– -d indicates the text input of the request body. If the model uses text input, this parameter is mandatory.
The following is an example of the cURL command for inference with text input:
curl -d '{"meta": {"uuid": "10eb0091-887f-4839-9929-cbc884f1e20e"},"data": {"req_data": [{"sepal_length": 3,"sepal_width": 1,"petal_length": 2.2,"petal_width": 4}]}}' -X POST http://192.168.0.158:1033

Figure 5-19 shows the inference result.

Figure 5-19 Inference result using the cURL command with text input
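The same requests can be sent from a script on any machine that can reach the edge node. The following short Python sketch mirrors the two cURL examples above; note that, unlike a cloud real-time service, the examples here do not pass an X-Auth-Token header. The node address, file path, and field names are placeholders.

import requests

# Placeholder: replace with the service address of your edge node.
edge_url = "http://192.168.0.158:1033"

# File input (equivalent to the curl -F example above)
with open("/home/data/cat.jpg", "rb") as f:
    resp = requests.post(edge_url, files={"images": f})
print(resp.status_code, resp.text)

# Text input (equivalent to the curl -d example above)
body = {"meta": {"uuid": "10eb0091-887f-4839-9929-cbc884f1e20e"},
        "data": {"req_data": [{"sepal_length": 3, "sepal_width": 1,
                               "petal_length": 2.2, "petal_width": 4}]}}
resp = requests.post(edge_url, json=body)
print(resp.status_code, resp.text)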

5.5 Modifying a Service
For a deployed service, you can modify its basic information to match service changes. You can modify the basic information about a service in either of the following ways:

Method 1: Modify Service Information on the Service Management Page

Method 2: Modify Service Information on the Service Details Page


Prerequisites
A service has been deployed.

Method 1: Modify Service Information on the Service Management Page
1. Log in to the ModelArts management console and choose Service Deployment from the left navigation pane. Go to the service management page of the target service.
2. In the service list, click Modify in the Operation column of the target service, modify basic service information, and click OK.
– For details about the real-time service parameters, see Deploying a Model as a Real-Time Service.
– For details about the batch service parameters, see Deploying a Model as a Batch Service.
– For details about the edge service parameters, see Deploying an Edge Service.

Services in the Deploying status cannot be modified.

Method 2: Modify Service Information on the Service Details Page
1. Log in to the ModelArts management console and choose Service Deployment from the left navigation pane. Go to the service management page of the target service.
2. Click the name of the target service. The service details page is displayed.
3. Click Modify in the upper right corner of the page, modify the service details, and click OK.
– For details about the real-time service parameters, see Deploying a Model as a Real-Time Service.
– For details about the batch service parameters, see Deploying a Model as a Batch Service.
– For details about the edge service parameters, see Deploying an Edge Service.

Figure 5-20 Service operation

5.6 Starting or Stopping a Service

Starting a Service
You can start services in the Successful, Abnormal, or Stopped status. Services in the Deploying status cannot be started. A service is billed when it is started and in the Running state. You can start a service in either of the following ways:


1. Log in to the ModelArts management console and choose Service Deployment from the left navigation pane. Go to the service management page of the target service. Click Start in the Operation column to start the target service.
2. Log in to the ModelArts management console and choose Service Deployment from the left navigation pane. Go to the service management page of the target service. Click the name of the target service. The service details page is displayed. Click Start in the upper right corner of the page to start the service.

Stopping a Service
You can stop services in the Running or Alarm status. Services in the Deploying status cannot be stopped. After a service is stopped, ModelArts stops charging. You can stop a service in either of the following ways:

1. Log in to the ModelArts management console and choose Service Deployment from the left navigation pane. Go to the service management page of the target service. Click Stop in the Operation column to stop the target service.
2. Log in to the ModelArts management console and choose Service Deployment from the left navigation pane. Go to the service management page of the target service. Click the name of the target service. The service details page is displayed. Click Stop in the upper right corner of the page to stop the service.

5.7 Deleting a Service
If a service is no longer in use, you can delete it to release resources.

1. Log in to the ModelArts management console and choose Service Deployment from the left navigation pane. Go to the service management page of the target service.
a. For a real-time service, choose More > Delete in the Operation column to delete it.
b. For a batch or edge service, click Delete in the Operation column to delete it.
2. Log in to the ModelArts management console and choose Service Deployment from the left navigation pane. Go to the service management page of the target service. Click the name of the target service. The service details page is displayed. Click Delete in the upper right corner of the page to delete the service.

A deleted service cannot be recovered. Exercise caution when performing this operation.


6 AI Market (Old Version)

ModelArts AI Market provides common datasets, and lists models and APIs shared by users. You can use the data sources shared by others to quickly build models. In addition, you can publish your own APIs or models to the market.

● ModelArts has launched a new version of the AI market, which is available in the CN North-Beijing1 and CN North-Beijing4 regions. The old version of the AI market is available only in CN North-Beijing1. For details about the differences between the new and old versions, see Differences Between the New and Old Versions of the ModelArts AI Market. For details about the new version of the AI market, see AI Market.

● Because both the new and old versions of the AI market exist in the CN North-Beijing1 region, you need to select the new or old version in the displayed window when using AI market functions, such as releasing models to the market or entering the AI market. You are advised to use the new version because it provides better performance and user experience.

Models

On the ModelArts management console, choose AI Market. On the Models tab page, select your desired model. You can perform the following operations on the Models tab page:

● Search for a model: Enter a model name or type in the search box and click the search icon. Models related to the search keyword are displayed. You can also click the filter criteria below to search for models.
● View a model: Click a model name to go to the model details page. The model details page displays the basic information and usage description of the model.

● Publish a model business card: Click Publish Model Business Card in the upper right corner of the page to advertise enterprise business offerings. To publish an inference model, import the model on the Model Management page and publish it. A published model business card does not contain the specific model.

● Save as my model: Click the model name to go to the model details page and click Save as My Model. In the Save as My Model dialog box that is displayed, enter the name, version, and description of the model and click OK to import the model to the Model Management module.


The model name consists of 1 to 64 visible characters. It must start with an uppercase or lowercase English letter, or a Chinese character. Only uppercase and lowercase letters, Chinese characters, digits, hyphens (-), and underscores (_) are allowed. The version cannot be left blank. An example version is 0.0.1.

APIs

On the ModelArts management console, choose AI Market. On the APIs tab page, select your desired API. You can perform the following operations on the APIs tab page:

● Search for an API: Enter an API name or type in the search box and click the search icon. APIs related to the search keyword are displayed.

● View an API: Click an API name to go to the API details page. The API details are displayed.

● Subscribe to an API: Click an API name to go to the API details page. Click Subscribe. In the dialog box that is displayed, enter the API name to subscribe to the API.

My Publishes

On the ModelArts management console, choose AI Market in the left navigation pane and click My Publishes in the upper right corner. On the page that is displayed, you can view the models and APIs you have published. See Figure 6-1. On the My Publishes page, you can view the category, name, status, rating, and downloads of the published objects. You can also perform operations on and search for published objects.

Figure 6-1 My Publishes


7 AI Market (New Version)


8 Resource Pools

ModelArts Resource Pools

When using ModelArts to implement the AI development lifecycle, you can use two different resource pools to train and deploy models.

● Public Resource Pool: provides public large-scale computing clusters, which are allocated on demand based on job parameter settings. Resources are isolated by job. Billing of public resource pools is based on the resource specifications, duration, and instance quantity, regardless of the tasks (including training, deployment, and development) where the public resource pools are used. Public resource pools are provided by ModelArts by default and do not need to be created or configured. You can directly select a public resource pool during AI development.

● Dedicated Resource Pool: provides exclusive computing resources, which can be used for notebook instances, training jobs, and model deployment. It delivers higher efficiency and cannot be shared with other users. Buy a dedicated resource pool before using it, and select the resource pool you have bought during AI development. For details about dedicated resource pools, see Dedicated Resource Pool, Buying a Dedicated Resource Pool, Scaling a Dedicated Resource Pool, and Deleting a Dedicated Resource Pool.

Dedicated Resource Pool

● Dedicated resource pools can be used in the following jobs and tasks: notebook, training, TensorBoard, and deployment (including real-time services, edge services, and offline services).

● Dedicated resource pools are classified into two types: Dedicated for Development/Training and Dedicated for Service Deployment. The Dedicated for Development/Training type can be used only for the Notebook, Training Jobs, and TensorBoard functions. The Dedicated for Service Deployment type can be used only for model deployment.

● Dedicated resource pools are available only when they are in the Running state. If a dedicated resource pool is unavailable or abnormal, rectify the fault before using it.

● After a dedicated resource pool is created, billing starts based on the selected specifications.


● Dedicated resource pools support both the pay-per-use and monthly/yearly billing modes. For details, see Pricing Details.

Buying a Dedicated Resource Pool

1. Log in to the ModelArts management console and choose Dedicated Resource Pools on the left.
2. On the Dedicated Resource Pools page, select Dedicated for Development/Training or Dedicated for Service Deployment.
3. Click Create in the upper left corner. The page for buying a dedicated resource pool is displayed.
4. Set the parameters on the page. For details about how to set parameters, see Table 8-1 and Table 8-2.

Table 8-1 Parameters of the Dedicated for Development/Training type

● Resource Type: The default value is Dedicated for Development/Training and cannot be changed.
● Billing Mode: Select a billing mode, Yearly/Monthly or Pay-per-use.
● Name: Name of a dedicated resource pool. The name consists of lowercase letters, digits, hyphens (-), and underscores (_). It must start with a lowercase letter and cannot end with a hyphen (-) or underscore (_).
● Description: Brief description of a dedicated resource pool.
● Nodes: Select the number of nodes in a dedicated resource pool. More nodes mean higher computing performance and a higher cost.
● Node Specifications: Currently, only modelarts.vm.gpu.p100 | 56 cores | 512 GiB | 1*P100 is supported.
● Required Duration: Select the time length when you want to use the resource pool. This parameter is mandatory only when the Yearly/Monthly billing mode is selected. The duration ranges from one month to one year. ModelArts provides a 1-year preference package, which allows you to enjoy the product for 1 year by paying for only 10 months.

Table 8-2 Parameters of the Dedicated for Service Deployment type

● Resource Type: The default value is Dedicated for Service Deployment and cannot be changed.
● Billing Mode: Only the Pay-per-use billing mode is supported.
● Name: Name of a dedicated resource pool. The name consists of lowercase letters, digits, hyphens (-), and underscores (_). It must start with a lowercase letter and cannot end with a hyphen (-) or underscore (_).
● Description: Brief description of a dedicated resource pool.
● Custom Network Configuration: If you enable Custom Network Configuration, the service instance runs on the specified network and can communicate with other cloud service resource instances on the network. If you do not enable Custom Network Configuration, ModelArts allocates a dedicated network to each user and isolates users from each other. If you enable Custom Network Configuration, set VPC, Subnet, and Security Group. If no network is available, go to the VPC service and create a network.
● Nodes: Select the number of nodes in a dedicated resource pool. More nodes mean higher computing performance and a higher cost.
● Node Specifications: Currently, modelarts.vm.cpu.8ud | 8 cores | 32 GiB and modelarts.vm.gpu.p4u8 | 8 cores | 32 GiB | 1*P4 are supported. Select one based on site requirements.

5. Click Next. The specifications confirmation dialog box is displayed.
6. After confirming the specifications, click Pay Now, and then complete the payment on the payment page.

You can view the resource pool you created in the dedicated resource pool list. After a dedicated resource pool is created, its status changes to Running.

Scaling a Dedicated Resource Pool

After a dedicated resource pool is used for a period of time, you can expand or reduce the capacity of the resource pool by increasing or decreasing the number of nodes.

A dedicated resource pool in Yearly/Monthly billing mode does not support scaling. If you bought a dedicated resource pool in Pay-per-use billing mode, you are billed for the number of nodes after scaling.

The procedure for scaling is as follows:

1. Go to the dedicated resource pool management page, locate the row that contains the desired dedicated resource pool, and click Scale in the Operation column.

2. On the scaling page, increase or decrease the number of nodes. Increasing the node quantity scales out the resource pool, whereas decreasing the node quantity scales in the resource pool. Scale the capacity based on service requirements.
   – During scale-out, check the quota of the current account before increasing the number of nodes. Otherwise, the scale-out will fail.
   – During scale-in, switch off the desired node in the Operation column to delete the node. To reduce one node, you need to switch off the node in Node List to delete the node. See Figure 8-1.

Figure 8-1 Switching off a node to delete the node during scale-in

3. Click Submit. After the request is submitted, the dedicated resource pool management page is displayed.

Deleting a Dedicated Resource Pool

If a dedicated resource pool is no longer needed during AI service development, you can delete the resource pool to release resources and reduce costs.

● After a dedicated resource pool is deleted, the training jobs, notebook instances, and deployments that depend on the resource pool are unavailable. A dedicated resource pool cannot be restored after being deleted. Exercise caution when deleting a dedicated resource pool.

● A dedicated resource pool in Yearly/Monthly billing mode cannot be deleted.

1. Go to the dedicated resource pool management page, locate the row that contains the desired dedicated resource pool, and click Delete in the Operation column.

2. In the dialog box that is displayed, click OK.


9 Model Templates

9.1 Model Template Overview

Because the configurations of models with the same function are similar, ModelArts integrates the configurations of such models into a common template. By using this template, you can easily and quickly import models without compiling the config.json configuration file. In simple terms, a template integrates an AI engine and model configurations. Each template corresponds to a specific AI engine and inference mode. With the templates, you can quickly import models to ModelArts.

Background

Templates are classified into general and non-general types.

● Non-general templates are customized for specific scenarios with the input and output mode fixed. For example, the TensorFlow-based image classification template uses the built-in image processing mode.

● General templates integrate a specific AI engine and running environment and use the undefined input and output mode. You need to select an input and output mode based on the model function or application scenario to overwrite the undefined mode. For example, an image classification model requires the built-in image processing mode, and an object detection model requires the built-in object detection mode.

The models imported in undefined mode cannot be deployed as batch services.

Using a Template

The following uses the TensorFlow-based image classification template (for details, see TensorFlow-Based Image Classification Template) as an example. You need to upload the TensorFlow model package to OBS in advance and store the model files in the model directory. When creating a model using this template, select this model directory.

1. On the Import Model page, set Meta Model Source to Template.


2. In the Template area, select TensorFlow-based image classification template. ModelArts also provides three filter criteria: Type, Engine, and Environment, helping you quickly find the desired template. If the three filter criteria cannot meet your requirements, you can enter keywords to search for the target template.

Figure 9-1 Selecting a template

3. For Model Folder, select the model directory where the model files reside. For details, see Template Description.

If a training job is executed multiple times, different version directories are generated, such as V001 and V002, and the generated models are stored in the model folder under the different version directories. When selecting model files, specify the model folder in the corresponding version directory.

Figure 9-2 Configuring the model folder

4. If the default input and output mode of the selected template can be overwritten, you can select an input and output mode based on the model function or application scenario. Input and Output Mode is an abstraction of the API in config.json. It describes the interface that the model provides for external inference. An input and output mode describes one or more APIs and corresponds to a template. For example, for the TensorFlow-based image classification template, Input and Output Mode supports Built-in image processing mode. The input and output mode cannot be modified in this template. Therefore, you can only view, but not modify, the default input and output mode of the template on the page.


For details about the supported input and output modes, see Input and Output Modes.

Supported Templates

● TensorFlow-Based Image Classification Template
● TensorFlow-py27 General Template
● TensorFlow-py36 General Template
● MXNet-py27 General Template
● MXNet-py36 General Template
● PyTorch-py27 General Template
● PyTorch-py36 General Template
● Caffe-CPU-py27 General Template
● Caffe-GPU-py27 General Template
● Caffe-CPU-py36 General Template
● Caffe-GPU-py36 General Template

Supported Input and Output Modes

● Built-in Object Detection Mode
● Built-in Image Processing Mode
● Built-in Predictive Analytics Mode
● Undefined Mode

9.2 Template Description

9.2.1 TensorFlow-Based Image Classification Template

Introduction

AI engine: TensorFlow 1.8; Environment: python2.7. You are advised to use this template to import a TensorFlow-based image classification model saved in SavedModel format. This template uses the built-in image processing mode of ModelArts. For details about the image processing mode, see Built-in Image Processing Mode. Ensure that your model can process images whose key is images, because you need to input an image whose key is images to the model for inference. When using the template to import a model, select the model directory containing the model files.

Template Input

The template input is the TensorFlow-based model package stored on OBS. Ensure that the OBS directory you use and ModelArts are in the same region. For details about model package requirements, see Model Package Example.

Input and Output Mode

Built-in Image Processing Mode cannot be overwritten. That is, another input and output mode cannot be selected during model creation.


Model Package Specifications

● The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
● The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
● The structure of the model package imported using a template is as follows:

model/
│
├── Model file               // (Mandatory) The model file format varies according to the engine. For details, see the model package example.
├── Custom Python package    // (Optional) User's Python package, which can be directly referenced in the model inference code
├── customize_service.py     // (Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.

Model Package Example

Structure of the TensorFlow-based model package

When publishing the model, you only need to specify the model directory.

OBS bucket/directory name
|── model                                  (Mandatory) The folder must be named model and is used to store model-related files.
    ├── <<Custom Python package>>          (Optional) User's Python package, which can be directly referenced in the model inference code
    ├── saved_model.pb                     (Mandatory) Protocol buffer file, which contains the graph description of the model
    ├── variables                          Mandatory for the main file of the *.pb model. The folder must be named variables and contains the weights and biases of the model.
    │   ├── variables.index                Mandatory
    │   └── variables.data-00000-of-00001  Mandatory
    ├── customize_service.py               (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file can exist. The .py files on which customize_service.py depends can be directly put in the model directory.
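The guide assumes the SavedModel already exists. As a minimal sketch (not part of this guide), the following TensorFlow 1.x code shows one way to produce saved_model.pb and the variables directory with an input tensor keyed images; the toy network, tensor shapes, and signature key are assumptions to be replaced with your own model.

# Minimal sketch (assumption, not from this guide): export a TensorFlow 1.x model as a
# SavedModel whose serving signature input key is "images", producing saved_model.pb
# and the variables/ files expected by this template.
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    images = tf.placeholder(tf.float32, shape=[None, 224, 224, 3], name='images')
    logits = tf.layers.dense(tf.layers.flatten(images), units=10)   # toy classifier, replace with your network
    scores = tf.nn.softmax(logits, name='scores')
    init = tf.global_variables_initializer()

with tf.Session(graph=graph) as sess:
    sess.run(init)
    # The export directory must not exist yet; its contents become the "model" folder uploaded to OBS.
    builder = tf.saved_model.builder.SavedModelBuilder('./model')
    signature = tf.saved_model.signature_def_utils.predict_signature_def(
        inputs={'images': images}, outputs={'scores': scores})
    builder.add_meta_graph_and_variables(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
    builder.save()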

9.2.2 TensorFlow-py27 General Template

Introduction

AI engine: TensorFlow 1.8; Environment: python2.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or application scenario. When using the template to import a model, select the model directory containing the model files.

Template Input

The template input is the TensorFlow-based model package stored on OBS. Ensure that the OBS directory you use and ModelArts are in the same region. For details about model package requirements, see Model Package Example.

Input and Output Mode

Undefined Mode can be overwritten. That is, another input and output mode can be selected during model creation.


Model Package Specifications

● The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
● The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
● The structure of the model package imported using a template is as follows:

model/
│
├── Model file               // (Mandatory) The model file format varies according to the engine. For details, see the model package example.
├── Custom Python package    // (Optional) User's Python package, which can be directly referenced in the model inference code
├── customize_service.py     // (Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.

Model Package Example

Structure of the TensorFlow-based model package

When publishing the model, you only need to specify the model directory.

OBS bucket/directory name
|── model                                  (Mandatory) The folder must be named model and is used to store model-related files.
    ├── <<Custom Python package>>          (Optional) User's Python package, which can be directly referenced in the model inference code
    ├── saved_model.pb                     (Mandatory) Protocol buffer file, which contains the graph description of the model
    ├── variables                          Mandatory for the main file of the *.pb model. The folder must be named variables and contains the weights and biases of the model.
    │   ├── variables.index                Mandatory
    │   └── variables.data-00000-of-00001  Mandatory
    ├── customize_service.py               (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file can exist. The .py files on which customize_service.py depends can be directly put in the model directory.

9.2.3 TensorFlow-py36 General Template

Introduction

AI engine: TensorFlow 1.8; Environment: python3.6; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or application scenario. When using the template to import a model, select the model directory containing the model files.

Template Input

The template input is the TensorFlow-based model package stored on OBS. Ensure that the OBS directory you use and ModelArts are in the same region. For details about model package requirements, see Model Package Example.

Input and Output Mode

Undefined Mode can be overwritten. That is, another input and output mode can be selected during model creation.


Model Package Specifications

● The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
● The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
● The structure of the model package imported using a template is as follows:

model/
│
├── Model file               // (Mandatory) The model file format varies according to the engine. For details, see the model package example.
├── Custom Python package    // (Optional) User's Python package, which can be directly referenced in the model inference code
├── customize_service.py     // (Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.

Model Package Example

Structure of the TensorFlow-based model package

When publishing the model, you only need to specify the model directory.

OBS bucket/directory name
|── model                                  (Mandatory) The folder must be named model and is used to store model-related files.
    ├── <<Custom Python package>>          (Optional) User's Python package, which can be directly referenced in the model inference code
    ├── saved_model.pb                     (Mandatory) Protocol buffer file, which contains the graph description of the model
    ├── variables                          Mandatory for the main file of the *.pb model. The folder must be named variables and contains the weights and biases of the model.
    │   ├── variables.index                Mandatory
    │   └── variables.data-00000-of-00001  Mandatory
    ├── customize_service.py               (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file can exist. The .py files on which customize_service.py depends can be directly put in the model directory.

9.2.4 MXNet-py27 General Template

Introduction

AI engine: MXNet 1.2.1; Environment: python2.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or application scenario. When using the template to import a model, select the model directory containing the model files.

Template Input

The template input is the MXNet model package stored on OBS. Ensure that the OBS directory you use and ModelArts are in the same region. For details about model package requirements, see Model Package Example.

Input and Output Mode

Undefined Mode can be overwritten. That is, another input and output mode can be selected during model creation.


Model Package Specifications

● The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
● The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
● The structure of the model package imported using a template is as follows:

model/
│
├── Model file               // (Mandatory) The model file format varies according to the engine. For details, see the model package example.
├── Custom Python package    // (Optional) User's Python package, which can be directly referenced in the model inference code
├── customize_service.py     // (Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.

Model Package Example

Structure of the MXNet-based model package

When publishing the model, you only need to specify the model directory.

OBS bucket/directory name
|── model                                  (Mandatory) The folder must be named model and is used to store model-related files.
    ├── <<Custom Python package>>          (Optional) User's Python package, which can be directly referenced in the model inference code
    ├── resnet-50-symbol.json              (Mandatory) Model definition file, which contains the neural network description of the model
    ├── resnet-50-0000.params              (Mandatory) Model variable parameter file, which contains parameter and weight information
    ├── customize_service.py               (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file can exist. The .py files on which customize_service.py depends can be directly put in the model directory.
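For reference, the following is a minimal sketch (not from this guide) of how a symbol.json and 0000.params pair can be produced with the MXNet Module API; the toy symbol, shapes, and file prefix are assumptions, and in practice the files come from your own trained network.

# Minimal sketch (assumption, not from this guide): save an MXNet checkpoint so that the
# prefix-symbol.json and prefix-0000.params files expected by this template are written.
import mxnet as mx

data = mx.sym.Variable('data')
fc = mx.sym.FullyConnected(data=data, num_hidden=10)      # toy network, replace with your own
net = mx.sym.SoftmaxOutput(data=fc, name='softmax')

mod = mx.mod.Module(symbol=net, data_names=['data'], label_names=['softmax_label'])
mod.bind(data_shapes=[('data', (1, 3, 224, 224))],
         label_shapes=[('softmax_label', (1,))])
mod.init_params()

# Writes resnet-50-symbol.json and resnet-50-0000.params to the current directory;
# copy both into the "model" folder before uploading it to OBS.
mod.save_checkpoint('resnet-50', 0)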

9.2.5 MXNet-py36 General Template

Introduction

AI engine: MXNet 1.2.1; Environment: python3.6; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or application scenario. When using the template to import a model, select the model directory containing the model files.

Template Input

The template input is the MXNet-based model package stored on OBS. Ensure that the OBS directory you use and ModelArts are in the same region. For details about model package requirements, see Model Package Example.

Input and Output Mode

Undefined Mode can be overwritten. That is, another input and output mode can be selected during model creation.


Model Package Specifications

● The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
● The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
● The structure of the model package imported using a template is as follows:

model/
│
├── Model file               // (Mandatory) The model file format varies according to the engine. For details, see the model package example.
├── Custom Python package    // (Optional) User's Python package, which can be directly referenced in the model inference code
├── customize_service.py     // (Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.

Model Package Example

Structure of the MXNet-based model package

When publishing the model, you only need to specify the model directory.

OBS bucket/directory name
|── model                                  (Mandatory) The folder must be named model and is used to store model-related files.
    ├── <<Custom Python package>>          (Optional) User's Python package, which can be directly referenced in the model inference code
    ├── resnet-50-symbol.json              (Mandatory) Model definition file, which contains the neural network description of the model
    ├── resnet-50-0000.params              (Mandatory) Model variable parameter file, which contains parameter and weight information
    ├── customize_service.py               (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file can exist. The .py files on which customize_service.py depends can be directly put in the model directory.

9.2.6 PyTorch-py27 General Template

Introduction

AI engine: PyTorch 1.0; Environment: python2.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or application scenario. When using the template to import a model, select the model directory containing the model files.

Template Input

The template input is the PyTorch-based model package stored on OBS. Ensure that the OBS directory you use and ModelArts are in the same region. For details about model package requirements, see Model Package Example.

Input and Output Mode

Undefined Mode can be overwritten. That is, another input and output mode can be selected during model creation.


Model Package Specifications

● The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
● The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
● The structure of the model package imported using a template is as follows:

model/
│
├── Model file               // (Mandatory) The model file format varies according to the engine. For details, see the model package example.
├── Custom Python package    // (Optional) User's Python package, which can be directly referenced in the model inference code
├── customize_service.py     // (Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.

Model Package Example

Structure of the PyTorch-based model package

When publishing the model, you only need to specify the model directory.

OBS bucket/directory name
|── model                                  (Mandatory) The folder must be named model and is used to store model-related files.
    ├── <<Custom Python package>>          (Optional) User's Python package, which can be directly referenced in the model inference code
    ├── resnet50.pth                       (Mandatory) PyTorch model file, which contains variable and weight information
    ├── customize_service.py               (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file can exist. The .py files on which customize_service.py depends can be directly put in the model directory.
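As a minimal sketch (not from this guide), the following shows one way the resnet50.pth file can be produced; using torchvision's ResNet-50 and saving the state_dict are assumptions, and the inference code is then expected to rebuild the same network and load this file.

# Minimal sketch (assumption, not from this guide): produce the resnet50.pth file
# expected by this template from a torchvision ResNet-50.
import torch
import torchvision.models as models

model = models.resnet50(pretrained=False)   # replace with your trained model

# Saving the state_dict stores only parameters and buffers; the inference code must
# rebuild the network before loading this file.
torch.save(model.state_dict(), 'resnet50.pth')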

9.2.7 PyTorch-py36 General Template

Introduction

AI engine: PyTorch 1.0; Environment: python3.6; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or application scenario. When using the template to import a model, select the model directory containing the model files.

Template Input

The template input is the PyTorch-based model package stored on OBS. Ensure that the OBS directory you use and ModelArts are in the same region. For details about model package requirements, see Model Package Example.

Input and Output Mode

Undefined Mode can be overwritten. That is, another input and output mode can be selected during model creation.

Model Package Specifications

● The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.


● The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
● The structure of the model package imported using a template is as follows:

model/
│
├── Model file               // (Mandatory) The model file format varies according to the engine. For details, see the model package example.
├── Custom Python package    // (Optional) User's Python package, which can be directly referenced in the model inference code
├── customize_service.py     // (Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.

Model Package Example

Structure of the PyTorch-based model package

When publishing the model, you only need to specify the model directory.

OBS bucket/directory name
|── model                                  (Mandatory) The folder must be named model and is used to store model-related files.
    ├── <<Custom Python package>>          (Optional) User's Python package, which can be directly referenced in the model inference code
    ├── resnet50.pth                       (Mandatory) PyTorch model file, which contains variable and weight information
    ├── customize_service.py               (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file can exist. The .py files on which customize_service.py depends can be directly put in the model directory.

9.2.8 Caffe-CPU-py27 General Template

Introduction

AI engine: CPU-based Caffe 1.0; Environment: python2.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or application scenario. When using the template to import a model, select the model directory containing the model files.

Template Input

The template input is the Caffe-based model package stored on OBS. Ensure that the OBS directory you use and ModelArts are in the same region. For details about model package requirements, see Model Package Example.

Input and Output Mode

Undefined Mode can be overwritten. That is, another input and output mode can be selected during model creation.

Model Package Specifications

● The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
● The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.


● The structure of the model package imported using a template is as follows:

model/
│
├── Model file               // (Mandatory) The model file format varies according to the engine. For details, see the model package example.
├── Custom Python package    // (Optional) User's Python package, which can be directly referenced in the model inference code
├── customize_service.py     // (Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.

Model Package Example

Structure of the Caffe-based model package

When publishing the model, you only need to specify the model directory.

OBS bucket/directory name
|── model                                  (Mandatory) The folder must be named model and is used to store model-related files.
    |── <<Custom Python package>>          (Optional) User's Python package, which can be directly referenced in the model inference code
    |── deploy.prototxt                    (Mandatory) Caffe model file, which contains information such as the model network structure
    |── resnet.caffemodel                  (Mandatory) Caffe model file, which contains variable and weight information
    |── customize_service.py               (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file can exist. The .py files on which customize_service.py depends can be directly put in the model directory.
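Before uploading the model folder to OBS, you may want to confirm locally that the prototxt and caffemodel files load together. The following is a minimal sketch under the assumption that pycaffe is installed; the file paths are placeholders.

# Minimal sketch (assumption, not from this guide): load the Caffe model package locally
# with pycaffe to confirm that deploy.prototxt and resnet.caffemodel match.
import caffe

caffe.set_mode_cpu()   # the Caffe-CPU templates target CPU inference

net = caffe.Net('model/deploy.prototxt',     # network structure
                'model/resnet.caffemodel',   # trained weights
                caffe.TEST)

# Print the blob shapes so you know what the inference code must feed the model.
for name, blob in net.blobs.items():
    print(name, blob.data.shape)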

9.2.9 Caffe-GPU-py27 General Template

Introduction

AI engine: GPU-based Caffe 1.0; Environment: python2.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or application scenario. When using the template to import a model, select the model directory containing the model files.

Template Input

The template input is the Caffe-based model package stored on OBS. Ensure that the OBS directory you use and ModelArts are in the same region. For details about model package requirements, see Model Package Example.

Input and Output Mode

Undefined Mode can be overwritten. That is, another input and output mode can be selected during model creation.

Model Package Specifications

● The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
● The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
● The structure of the model package imported using a template is as follows:

model/
│
├── Model file               // (Mandatory) The model file format varies according to the engine. For details, see the model package example.
├── Custom Python package    // (Optional) User's Python package, which can be directly referenced in the model inference code
├── customize_service.py     // (Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.

Model Package Example

Structure of the Caffe-based model package

When publishing the model, you only need to specify the model directory.

OBS bucket/directory name
|── model                                  (Mandatory) The folder must be named model and is used to store model-related files.
    |── <<Custom Python package>>          (Optional) User's Python package, which can be directly referenced in the model inference code
    |── deploy.prototxt                    (Mandatory) Caffe model file, which contains information such as the model network structure
    |── resnet.caffemodel                  (Mandatory) Caffe model file, which contains variable and weight information
    |── customize_service.py               (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file can exist. The .py files on which customize_service.py depends can be directly put in the model directory.

9.2.10 Caffe-CPU-py36 General Template

Introduction

AI engine: CPU-based Caffe 1.0; Environment: python3.6; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or application scenario. When using the template to import a model, select the model directory containing the model files.

Template Input

The template input is the Caffe-based model package stored on OBS. Ensure that the OBS directory you use and ModelArts are in the same region. For details about model package requirements, see Model Package Example.

Input and Output Mode

Undefined Mode can be overwritten. That is, another input and output mode can be selected during model creation.

Model Package Specifications

● The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
● The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
● The structure of the model package imported using a template is as follows:

model/
│
├── Model file               // (Mandatory) The model file format varies according to the engine. For details, see the model package example.
├── Custom Python package    // (Optional) User's Python package, which can be directly referenced in the model inference code
├── customize_service.py     // (Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.

Model Package Example

Structure of the Caffe-based model package

When publishing the model, you only need to specify the model directory.

OBS bucket/directory name
|── model                                  (Mandatory) The folder must be named model and is used to store model-related files.
    |── <<Custom Python package>>          (Optional) User's Python package, which can be directly referenced in the model inference code
    |── deploy.prototxt                    (Mandatory) Caffe model file, which contains information such as the model network structure
    |── resnet.caffemodel                  (Mandatory) Caffe model file, which contains variable and weight information
    |── customize_service.py               (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file can exist. The .py files on which customize_service.py depends can be directly put in the model directory.

9.2.11 Caffe-GPU-py36 General Template

Introduction

AI engine: GPU-based Caffe 1.0; Environment: python3.6; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or application scenario. When using the template to import a model, select the model directory containing the model files.

Template Input

The template input is the Caffe-based model package stored on OBS. Ensure that the OBS directory you use and ModelArts are in the same region. For details about model package requirements, see Model Package Example.

Input and Output Mode

Undefined Mode can be overwritten. That is, another input and output mode can be selected during model creation.

Model Package Specifications

● The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
● The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
● The structure of the model package imported using a template is as follows:

model/
│
├── Model file               // (Mandatory) The model file format varies according to the engine. For details, see the model package example.
├── Custom Python package    // (Optional) User's Python package, which can be directly referenced in the model inference code
├── customize_service.py     // (Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.


Model Package Example

Structure of the Caffe-based model package

When publishing the model, you only need to specify the model directory.

OBS bucket/directory name
|── model                                  (Mandatory) The folder must be named model and is used to store model-related files.
    |── <<Custom Python package>>          (Optional) User's Python package, which can be directly referenced in the model inference code
    |── deploy.prototxt                    (Mandatory) Caffe model file, which contains information such as the model network structure
    |── resnet.caffemodel                  (Mandatory) Caffe model file, which contains variable and weight information
    |── customize_service.py               (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file can exist. The .py files on which customize_service.py depends can be directly put in the model directory.

9.3 Input and Output Modes

9.3.1 Built-in Object Detection Mode

Input

This is a built-in object detection input and output mode, which is applicable to object detection models. The models that use this mode are identified as object detection models. The prediction request path is /, the request protocol is HTTP, the request method is POST, Content-Type is multipart/form-data, key is images, and type is file. Before selecting this mode, ensure that your model can process the input data whose key is images.

Output

The inference result is returned in JSON format. For details about the fields, see Table 9-1.

Table 9-1 Parameter description

● detection_classes (string array): List of detected objects, for example, ["yunbao","cat"].
● detection_boxes (float array): Coordinates of the bounding box, in the format of [y_min, x_min, y_max, x_max].
● detection_scores (float array): Confidence scores of detected objects, which are used to measure the detection accuracy.

The JSON Schema of the inference result is as follows:


{ "type": "object", "properties": { "detection_classes": { "items": { "type": "string" }, "type": "array" }, "detection_boxes": { "items": { "minItems": 4, "items": { "type": "number" }, "type": "array", "maxItems": 4 }, "type": "array" }, "detection_scores": { "items": { "type": "string" }, "type": "array" } }}

Sample Request

In this mode, input an image to be processed in the inference request. The inference result is returned in JSON format. The following are examples:

● Performing prediction on the console

On the Prediction tab page of the service details page, upload an image and click Predict to obtain the prediction result.

Figure 9-3 Performing prediction on the console

● Using Postman to call a RESTful API for prediction
After a model is deployed as a service, you can obtain the API URL on the Usage Guides tab page of the service details page.
– On the Headers tab page, set Content-Type to multipart/form-data and X-Auth-Token to the actual token obtained.


Figure 9-4 Setting the request header

– On the Body tab page, set the request body. Set key to images, select File, select the image to be processed, and click send to send your prediction request. (An equivalent scripted request is sketched after Figure 9-5.)

Figure 9-5 Setting the request body
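If you prefer a script to Postman, the following sketch sends the same multipart request with the Python requests library; the service URL, token, and image path are placeholders to be replaced with the values from the Usage Guides tab, and the response shown in the comment is only illustrative.

# Minimal sketch (assumption, not from this guide): call the deployed service that uses
# the built-in object detection mode. URL, token, and file path are placeholders.
import requests

url = 'https://<api-url-from-usage-guides>'        # placeholder
headers = {'X-Auth-Token': '<your-token>'}         # requests sets the multipart Content-Type itself
files = {'images': open('test.jpg', 'rb')}         # key must be "images", type file

resp = requests.post(url, headers=headers, files=files)
print(resp.json())
# Illustrative response shape (values are made up):
# {"detection_classes": ["cat"], "detection_boxes": [[10.0, 15.0, 200.0, 180.0]], "detection_scores": [0.97]}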

9.3.2 Built-in Image Processing Mode

Input

This is a built-in image processing input and output mode, which is applicable to image classification models. The models that use this mode are identified as image classification models. The prediction request path is /, the request protocol is HTTPS, the request method is POST, Content-Type is multipart/form-data, key is images, and type is file. Before selecting this mode, ensure that your model can process the input data whose key is images.

Output

The inference result is returned in JSON format. The specific fields are determined by the model.

Sample Request

In this mode, input an image to be processed in the inference request. The response in JSON format varies according to the model. The following are examples:

● Performing prediction on the console


Figure 9-6 Performing prediction on the console

● Using Postman to call a RESTful API for prediction
After a model is deployed as a service, you can obtain the API URL on the Usage Guides tab page of the service details page. On the Body tab page, set the request body. Set key to images, select File, select the image to be processed, and click send to send your prediction request.

Figure 9-7 Calling a RESTful API

9.3.3 Built-in Predictive Analytics Mode

Input

This is a built-in predictive analytics input and output mode, which is applicable to predictive analytics models. The models that use this mode are identified as predictive analytics models. The prediction request path is /, the request protocol is HTTP, the request method is POST, and Content-Type is application/json. The request body is in JSON format. For details about the JSON fields, see Table 9-2. Before selecting this mode, ensure that your model can process the input data in JSON Schema format.

Table 9-2 JSON field description

● data (data structure): Inference data. For details, see Table 9-3.

Table 9-3 Data description

● req_data (ReqData array): List of inference data.


ReqData is of the Object type and indicates the inference data. The data structure is determined by the application scenario. For models using this mode, the preprocessing logic in the custom model inference code should be able to correctly process the data input in the format defined by the mode.

The JSON Schema of a prediction request is as follows:

{ "type": "object", "properties": { "data": { "type": "object", "properties": { "req_data": { "items": [{ "type": "object", "properties": {} }], "type": "array" } } } }}

Output

The inference result is returned in JSON format. For details about the JSON fields, see Table 9-4.

Table 9-4 JSON field description

● data (data structure): Inference data. For details, see Table 9-5.

Table 9-5 Data description

● resp_data (RespData array): List of prediction results.

Similar to ReqData, RespData is also of the Object type and indicates the prediction result. Its structure is determined by the application scenario. For models using this mode, the postprocessing logic in the custom model inference code should be able to correctly output data in the format defined by the mode.

The JSON Schema of a prediction result is as follows:

{ "type": "object", "properties": { "data": { "type": "object", "properties": { "resp_data": { "type": "array",

ModelArtsUser Guide (AI Beginners) 9 Model Templates

Issue 01 (2020-02-25) Copyright © Huawei Technologies Co., Ltd. 173

Page 179: User Guide (AI Beginners) - HUAWEI CLOUD...Table 2-1 Parameter description Parameter Description Name Enter the name of the dataset. A dataset name can contain only letters, digits,

"items": [{ "type": "object", "properties": {} }] } } } }}

Sample Request

In this mode, input the data to be predicted in JSON format. The prediction result is returned in JSON format. The following are examples:

● Performing prediction on the console
On the Prediction tab page of the service details page, enter inference code and click Predict to obtain the prediction result.

Figure 9-8 Prediction result

● Using Postman to call a RESTful API for prediction
After a model is deployed as a service, you can obtain the API URL on the Usage Guides tab page of the service details page.
– On the Headers tab page, set Content-Type to application/json and X-Auth-Token to the actual token obtained.

Figure 9-9 Setting the request header for prediction

– On the Body tab page, edit the data to be predicted and click send to send your prediction request.
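The same call can also be scripted. The following minimal sketch sends a JSON prediction request with the Python requests library; the endpoint, token, and payload fields are placeholders, not values from this guide.

# Minimal sketch: send a JSON prediction request to a deployed predictive
# analytics service. Replace the URL, token, and payload with your own values.
import requests

api_url = "https://<modelarts-inference-endpoint>/"   # from the Usage Guides tab
headers = {
    "Content-Type": "application/json",
    "X-Auth-Token": "<your IAM token>",
}
payload = {"data": {"req_data": [{"attr_1": 5.1, "attr_2": 3.5}]}}  # hypothetical fields

resp = requests.post(api_url, headers=headers, json=payload)
print(resp.json())   # expected shape: {"data": {"resp_data": [...]}}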

9.3.4 Undefined Mode

Description

The undefined mode does not define the input and output mode; the input and output mode is determined by the model. Select this mode only when the existing input and output modes are not applicable to the application scenario of the model. Models imported in undefined mode cannot be deployed as batch services. In addition, the service prediction page may not be displayed properly. New modes are coming soon for more application scenarios.

Input

No limit.

Output

No limit.

Sample Request

The undefined mode has no specific sample request because the input and output of the request are entirely determined by the model.


10 Model Package Specifications

10.1 Model Package Specifications

When you import models in Model Management, if the meta model is imported from OBS or a container image, the model package must meet the following specifications:

● The model package must contain the model folder. The model folder stores the model file, model configuration file, and model inference code.

● The model configuration file must exist and its name is fixed to config.json. There is only one model configuration file. For details about how to compile the model configuration file, see Specifications for Compiling the Model Configuration File.

● The model inference code file is optional. If it is provided, its name must be fixed to customize_service.py and there can be only one model inference code file; follow the instructions in Specifications for Compiling Model Inference Code to compile it. You are advised to use the relative import mode (Python import) to reference any custom package, as shown in the sketch below.
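The following is a minimal sketch of such a layout, assuming a hypothetical helper package named my_utils placed next to customize_service.py in the model directory. The package and function names are illustrative, not part of the specification.

# customize_service.py -- minimal sketch (names are illustrative).
# Assumed layout inside the model directory:
#   model/
#   ├── customize_service.py
#   ├── my_utils/                # hypothetical custom Python package
#   │   ├── __init__.py
#   │   └── preprocess.py        # defines normalize(data)
#   └── ...
from model_service.tfserving_model_service import TfServingBaseService
from my_utils.preprocess import normalize   # the model directory is on the import path


class DemoService(TfServingBaseService):

    def _preprocess(self, data):
        # Delegate the actual preprocessing to the custom package.
        return normalize(data)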

ModelArts provides samples and their sample code for various engines. You can refer to the samples to compile your configuration files and inference code. For details, see ModelArts Samples.

Model Package Example

● Structure of the TensorFlow-based model package
When publishing the model, you only need to specify the ocr directory.
OBS bucket/directory name
|── ocr
|   ├── model                      (Mandatory) Name of a fixed subdirectory, which is used to store model-related files
|   │   ├── <<Custom Python package>>       (Optional) User's Python package, which can be directly referenced in the model inference code
|   │   ├── saved_model.pb         (Mandatory) Protocol buffer file, which contains the graph description of the model
|   │   ├── variables              Name of a fixed sub-directory, which contains the weights and biases of the model. It is mandatory for the main file of the *.pb model.
|   │   │   ├── variables.index                   Mandatory
|   │   │   ├── variables.data-00000-of-00001     Mandatory
|   │   ├── config.json            (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file exists.
|   │   ├── customize_service.py   (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.

● Structure of the MXNet-based model package
When publishing the model, you only need to specify the resnet directory.
OBS bucket/directory name
|── resnet
|   ├── model                      (Mandatory) Name of a fixed subdirectory, which is used to store model-related files
|   │   ├── <<Custom Python package>>       (Optional) User's Python package, which can be directly referenced in the model inference code
|   │   ├── resnet-50-symbol.json  (Mandatory) Model definition file, which contains the neural network description of the model
|   │   ├── resnet-50-0000.params  (Mandatory) Model variable parameter file, which contains parameter and weight information
|   │   ├── config.json            (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file exists.
|   │   ├── customize_service.py   (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.

● Structure of the Image-based model package
When publishing the model, you only need to specify the resnet directory.
OBS bucket/directory name
|── resnet
|   ├── model                      (Mandatory) Name of a fixed subdirectory, which is used to store model-related files
|   │   ├── config.json            (Mandatory) Model configuration file (the address of the SWR image must be configured). The file name is fixed to config.json. Only one model configuration file exists.

● Structure of the PySpark-based model package
When publishing the model, you only need to specify the resnet directory.
OBS bucket/directory name
|── resnet
|   ├── model                      (Mandatory) Name of a fixed subdirectory, which is used to store model-related files
|   │   ├── <<Custom Python package>>       (Optional) User's Python package, which can be directly referenced in the model inference code
|   │   ├── spark_model            (Mandatory) Model folder, which contains the model content saved by PySpark
|   │   ├── config.json            (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file exists.
|   │   ├── customize_service.py   (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.

● Structure of the PyTorch-based model package
When publishing the model, you only need to specify the resnet directory.
OBS bucket/directory name
|── resnet
|   ├── model                      (Mandatory) Name of a fixed subdirectory, which is used to store model-related files
|   │   ├── <<Custom Python package>>       (Optional) User's Python package, which can be directly referenced in the model inference code
|   │   ├── resnet50.pth           (Mandatory) PyTorch model file, which contains variable and weight information and is saved as state_dict
|   │   ├── config.json            (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file exists.
|   │   ├── customize_service.py   (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.

● Structure of the Caffe-based model package
When publishing the model, you only need to specify the resnet directory.
OBS bucket/directory name
|── resnet
|   ├── model                      (Mandatory) Name of a fixed subdirectory, which is used to store model-related files
|   │   ├── <<Custom Python package>>       (Optional) User's Python package, which can be directly referenced in the model inference code
|   │   ├── deploy.prototxt        (Mandatory) Caffe model file, which contains information such as the model network structure
|   │   ├── resnet.caffemodel      (Mandatory) Caffe model file, which contains variable and weight information
|   │   ├── config.json            (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file exists.
|   │   ├── customize_service.py   (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.

● Structure of the XGBoost-based model package
When publishing the model, you only need to specify the resnet directory.
OBS bucket/directory name
|── resnet
|   ├── model                      (Mandatory) Name of a fixed subdirectory, which is used to store model-related files
|   │   ├── <<Custom Python package>>       (Optional) User's Python package, which can be directly referenced in the model inference code
|   │   ├── *.m                    (Mandatory) Model file whose extension name is .m
|   │   ├── config.json            (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file exists.
|   │   ├── customize_service.py   (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.

● Structure of the Scikit_Learn-based model package
When publishing the model, you only need to specify the resnet directory.
OBS bucket/directory name
|── resnet
|   ├── model                      (Mandatory) Name of a fixed subdirectory, which is used to store model-related files
|   │   ├── <<Custom Python package>>       (Optional) User's Python package, which can be directly referenced in the model inference code
|   │   ├── *.m                    (Mandatory) Model file whose extension name is .m
|   │   ├── config.json            (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file exists.
|   │   ├── customize_service.py   (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.

10.2 Specifications for Compiling the Model Configuration File

A model developer needs to compile a configuration file when publishing a model. The model configuration file describes the model usage, computing framework, precision, inference code dependency packages, and model APIs.

Configuration File Parameter Description

The configuration file is in JSON format. Table 10-1 describes the parameters.


Table 10-1 Parameter description

Parameter | Mandatory | Data Type | Description
model_algorithm | Yes | String | Model algorithm, which is set by the model developer to help model users understand the usage of the model. The options are image_classification, object_detection, predict_analysis, and developers' custom algorithms.
model_type | Yes | String | Model AI engine, which indicates the computing framework used by a model. The options are TensorFlow, MXNet, Spark_MLlib, Caffe, Scikit_Learn, XGBoost, Image, and PyTorch.
runtime | No | String | Model running environment. The value of runtime is related to model_type. Select your engine and development environment. For details about the supported running environments, see Table 4-8.
swr_location | No | String | SWR image address. If you import a custom image model from OBS, swr_location is mandatory. swr_location specifies the address of the docker image on SWR, indicating that the docker image on SWR is used to publish the model. To import an image model, you are advised to select Container image.
metrics | No | Object | Model precision information, including the F1 score, recall rate, precision, and accuracy. For details about the metrics object structure, see Table 10-2.
apis | Yes | api array | RESTful API array provided by a model. For details about the API data structure, see Table 10-3.
  ● When model_type is set to Image, that is, in the model scenario of a custom image, APIs with different paths can be declared in apis based on the request paths exposed by the image.
  ● When model_type is not Image, only one API whose request path is / can be declared in apis because the preconfigured AI engine exposes only one inference API whose request path is /.
dependencies | No | dependency array | Packages on which the inference code and model depend. The model developer must provide the package name, installation method, and version restrictions. For details about the dependency structure array, see Table 10-6. Dependency packages cannot be installed for custom image models.
health | No | health data structure | Configuration information of an image health interface. This parameter is supported only by custom images. For details about the health data structure, see Table 10-8.

Table 10-2 metrics object description

Parameter | Mandatory | Data Type | Description
f1 | No | Number | F1 score. The value is rounded to 17 decimal places.
recall | No | Number | Recall rate. The value is rounded to 17 decimal places.
precision | No | Number | Precision. The value is rounded to 17 decimal places.
accuracy | No | Number | Accuracy. The value is rounded to 17 decimal places.

Table 10-3 api array

Parameter | Mandatory | Data Type | Description
protocol | Yes | String | Request protocol
url | Yes | String | Request path. For a custom image model (model_type is Image), set this parameter to the actual request path exposed in the image. For a non-custom image model (model_type is not Image), the URL can only be /.
method | Yes | String | Request method
request | Yes | Object | Request body. For details about the request structure, see Table 10-4.
response | Yes | Object | Response body. For details about the response structure, see Table 10-5.

Table 10-4 request description

Parameter | Mandatory | Data Type | Description
Content-type | Yes | String | Data is sent based on the specified content type.
data | Yes | String | The request body is described in JSON Schema.

Table 10-5 response description

Parameter | Mandatory | Data Type | Description
Content-type | Yes | String | Data is sent based on the specified content type.
data | Yes | String | The response body is described in JSON Schema.

Table 10-6 dependency array

Parameter | Mandatory | Data Type | Description
installer | Yes | String | Installation method. Only pip is supported.
packages | Yes | package array | Dependency package collection. For details about the package structure array, see Table 10-7.


Table 10-7 package array

Parameter | Mandatory | Type | Description
package_name | Yes | String | Dependency package name. Chinese characters and special characters (&!'"<>=) are not allowed.
package_version | No | String | Dependency package version. If the dependency package does not rely on the version number, leave this field blank. Chinese characters and special characters (&!'"<>=) are not allowed.
restraint | No | String | Version restriction. This parameter is mandatory only when package_version exists. Possible values are EXACT, ATLEAST, and ATMOST.
  ● EXACT indicates that the specified version is installed.
  ● ATLEAST indicates that the version of the installation package is not earlier than the specified version.
  ● ATMOST indicates that the version of the installation package is not later than the specified version.

Table 10-8 health data structure description

Parameter | Mandatory | Type | Description
url | Yes | String | Request URL of the health check interface
protocol | No | String | Request protocol of the health check interface. Currently, only HTTP is supported.
initial_delay_seconds | No | String | After an instance is started, a health check starts after the number of seconds configured in initial_delay_seconds.
timeout_seconds | No | String | Health check timeout
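This guide does not include a dedicated health example, so the following sketch only assembles a hypothetical health block from the field descriptions in Table 10-8; the path and timing values are placeholders, not recommendations.

# Sketch only: assemble the "health" fragment described in Table 10-8 and print
# it as JSON. The URL and timing values are illustrative placeholders.
import json

health = {
    "url": "/health",                # request URL of the health check interface
    "protocol": "http",              # only HTTP is currently supported
    "initial_delay_seconds": "30",   # wait 30 s after the instance starts
    "timeout_seconds": "5",          # health check timeout
}

print(json.dumps({"health": health}, indent=2))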


Example of the Object Detection Model Configuration File

{
  "model_type": "TensorFlow",
  "model_algorithm": "object_detection",
  "metrics": { "f1": 0.345294, "accuracy": 0.462963, "precision": 0.338977, "recall": 0.351852 },
  "apis": [{
    "protocol": "https",
    "url": "/",
    "method": "post",
    "request": {
      "Content-type": "multipart/form-data",
      "data": {
        "type": "object",
        "properties": { "images": { "type": "file" } }
      }
    },
    "response": {
      "Content-type": "multipart/form-data",
      "data": {
        "type": "object",
        "properties": {
          "detection_classes": { "type": "array", "items": [{ "type": "string" }] },
          "detection_boxes": {
            "type": "array",
            "items": [{ "type": "array", "minItems": 4, "maxItems": 4, "items": [{ "type": "number" }] }]
          },
          "detection_scores": { "type": "number" }
        }
      }
    }
  }],
  "dependencies": [{
    "installer": "pip",
    "packages": [
      { "restraint": "ATLEAST", "package_version": "1.15.0", "package_name": "numpy" },
      { "restraint": "", "package_version": "", "package_name": "h5py" },
      { "restraint": "ATLEAST", "package_version": "1.8.0", "package_name": "tensorflow" },
      { "restraint": "ATLEAST", "package_version": "5.2.0", "package_name": "Pillow" }
    ]
  }]
}

Figure 10-1 Inference example

Example of the Image Classification Model Configuration File

{
  "model_type": "TensorFlow",
  "model_algorithm": "image_classification",
  "metrics": { "f1": 0.345294, "accuracy": 0.462963, "precision": 0.338977, "recall": 0.351852 },
  "apis": [{
    "protocol": "https",
    "url": "/",
    "method": "post",
    "request": {
      "Content-type": "multipart/form-data",
      "data": {
        "type": "object",
        "properties": { "images": { "type": "file" } }
      }
    },
    "response": {
      "Content-type": "multipart/form-data",
      "data": {
        "type": "object",
        "properties": {
          "predicted_label": { "type": "string" },
          "scores": {
            "type": "array",
            "items": [{
              "type": "array",
              "minItems": 2,
              "maxItems": 2,
              "items": [{ "type": "string" }, { "type": "number" }]
            }]
          }
        }
      }
    }
  }],
  "dependencies": [{
    "installer": "pip",
    "packages": [
      { "restraint": "ATLEAST", "package_version": "1.15.0", "package_name": "numpy" },
      { "restraint": "", "package_version": "", "package_name": "h5py" },
      { "restraint": "ATLEAST", "package_version": "1.8.0", "package_name": "tensorflow" },
      { "restraint": "ATLEAST", "package_version": "5.2.0", "package_name": "Pillow" }
    ]
  }]
}

Figure 10-2 Example of inference results

Example of the Predictive Analytics Model Configuration File

{
  "model_type": "TensorFlow",
  "model_algorithm": "predict_analysis",
  "metrics": { "f1": 0.345294, "accuracy": 0.462963, "precision": 0.338977, "recall": 0.351852 },
  "apis": [{
    "protocol": "https",
    "url": "/",
    "method": "post",
    "request": {
      "Content-type": "application/json",
      "data": {
        "type": "object",
        "properties": {
          "data": {
            "type": "object",
            "properties": {
              "req_data": {
                "items": [{ "type": "object", "properties": {} }],
                "type": "array"
              }
            }
          }
        }
      }
    },
    "response": {
      "Content-type": "multipart/form-data",
      "data": {
        "type": "object",
        "properties": {
          "data": {
            "type": "object",
            "properties": {
              "resp_data": {
                "type": "array",
                "items": [{ "type": "object", "properties": {} }]
              }
            }
          }
        }
      }
    }
  }],
  "dependencies": [{
    "installer": "pip",
    "packages": [
      { "restraint": "ATLEAST", "package_version": "1.15.0", "package_name": "numpy" },
      { "restraint": "", "package_version": "", "package_name": "h5py" },
      { "restraint": "ATLEAST", "package_version": "1.8.0", "package_name": "tensorflow" },
      { "restraint": "ATLEAST", "package_version": "5.2.0", "package_name": "Pillow" }
    ]
  }]
}

Example of inference results


Example of the Custom Image Model Configuration File

{
  "model_algorithm": "image_classification",
  "model_type": "Image",
  "metrics": { "f1": 0.345294, "accuracy": 0.462963, "precision": 0.338977, "recall": 0.351852 },
  "apis": [{
    "protocol": "https",
    "url": "/",
    "method": "post",
    "request": {
      "Content-type": "multipart/form-data",
      "data": {
        "type": "object",
        "properties": { "images": { "type": "file" } }
      }
    },
    "response": {
      "Content-type": "multipart/form-data",
      "data": {
        "type": "object",
        "required": ["predicted_label", "scores"],
        "properties": {
          "predicted_label": { "type": "string" },
          "scores": {
            "type": "array",
            "items": [{
              "type": "array",
              "minItems": 2,
              "maxItems": 2,
              "items": [{ "type": "string" }, { "type": "number" }]
            }]
          }
        }
      }
    }
  }]
}

Example of a model configuration file using a custom dependency package

The following example defines the TensorFlow-1.14 dependency environment.

{
  // The model algorithm is image classification.
  "model_algorithm": "image_classification",
  // The model type is TensorFlow. It does not need to be accurately defined because of custom inference; the environment is determined by the dependency package.
  "model_type": "TensorFlow",
  // The running environment of the custom model script is Python 3.6. Python 2.7 is also supported.
  "runtime": "python3.6",
  // The model service provides a form request API. The request type is file and the request key is images.
  "apis": [{
    "protocol": "https",
    "url": "/",
    "method": "post",
    "request": {
      "Content-type": "multipart/form-data",
      "data": {
        "type": "object",
        "properties": { "images": { "type": "file" } }
      }
    },
    "response": {
      "Content-type": "application/json",
      "data": {
        "type": "object",
        "properties": {
          "mnist_result": {
            "type": "array",
            "items": [{ "type": "string" }]
          }
        }
      }
    }
  }],
  // Model precision information (for model display only)
  "metrics": {
    "f1": 0.124555,
    "recall": 0.171875,
    "precision": 0.0023493892851938493,
    "accuracy": 0.00746268656716417
  },
  "dependencies": [{
    "installer": "pip",
    "packages": [{
      "restraint": "ATLEAST",
      "package_version": "1.14.0",
      "package_name": "tensorflow"
    }]
  }]
}


10.3 Specifications for Compiling Model Inference Code

This section describes how to compile model inference code in ModelArts. Below the compilation instructions, this section provides an example of inference code for the TensorFlow engine and an example of custom inference logic in an inference script.

Specifications for Compiling Inference Code

1. All custom Python inference classes must inherit from the BaseService class. Table 10-9 lists the import statements of the parent classes for the different model types.

Table 10-9 Import statements of the BaseService class

Model Type | Parent Class | Import Statement
TensorFlow | TfServingBaseService | from model_service.tfserving_model_service import TfServingBaseService
MXNet | MXNetBaseService | from mms.model_service.mxnet_model_service import MXNetBaseService
PyTorch | PTServingBaseService | from model_service.pytorch_model_service import PTServingBaseService
Pyspark | SparkServingBaseService | from model_service.spark_model_service import SparkServingBaseService
Caffe | CaffeBaseService | from model_service.caffe_model_service import CaffeBaseService
XGBoost | XgSklServingBaseService | from model_service.python_model_service import XgSklServingBaseService
Scikit_Learn | XgSklServingBaseService | from model_service.python_model_service import XgSklServingBaseService

2. The following methods can be rewritten:


Table 10-10 Methods to be rewritten

Method | Description
__init__(self, model_name, model_path) | Initialization method. Models and labels are loaded in this method. This method must be rewritten for models based on PyTorch and Caffe to implement the model loading logic.
_preprocess(self, data) | Preprocess method, which is called before an inference request and is used to convert the original request data of an API into the expected input data of the model.
_inference(self, data) | Inference request method. You are not advised to rewrite this method, because once it is rewritten, the built-in inference process of ModelArts is overwritten and your custom inference logic runs instead.
_postprocess(self, data) | Postprocess method, which is called after an inference request is completed and is used to convert the model output to the API output.

● Generally, you can choose to rewrite the preprocess and postprocess methods to implement preprocessing of the API input and postprocessing of the inference output.
● Rewriting the init method of the BaseService inheritance class may cause a model to run abnormally.

3. Currently, two types of content-type APIs can be used for inputting data: multipart/form-data and application/json.
– multipart/form-data request
curl -X POST \
  <modelarts-inference-endpoint> \
  -F image1=@cat.jpg \
  -F image2=@horse.jpg
The corresponding input data is as follows:
[
  {
    "image1": {
      "cat.jpg": "<cat.jpg file io>"
    }
  },
  {
    "image2": {
      "horse.jpg": "<horse.jpg file io>"
    }
  }
]
– application/json request
curl -X POST \
  <modelarts-inference-endpoint> \
  -d '{
    "images": "base64 encode image"
  }'
The corresponding input data is a Python dict (a handling sketch is shown after this list):
{
  "images": "base64 encode image"
}

4. The attribute that can be used is the local directory where the model resides. The attribute name is self.model_path. In addition, PySpark-based models can use self.spark to obtain the SparkSession object in customize_service.py.
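The following rough sketch ties items 3 and 4 together: it reads a label file from self.model_path in __init__ and decodes the base64 image carried in an application/json request in _preprocess. The label file name and JSON key are assumptions for illustration, and overriding __init__ means the class also takes over model loading, as in the custom inference logic example later in this section.

# Sketch only: use self.model_path in __init__ and handle an application/json
# request body ({"images": "<base64 string>"}) in _preprocess. The label file
# name "labels.txt" and the key "images" are illustrative assumptions.
import base64
import io
import os

import numpy as np
from PIL import Image

from model_service.tfserving_model_service import TfServingBaseService


class DemoJsonService(TfServingBaseService):

    def __init__(self, model_name, model_path):
        # Overriding __init__ means this class also handles model loading itself,
        # as in the custom inference logic example below.
        self.model_name = model_name
        self.model_path = model_path
        # self.model_path is the local directory where the model resides.
        label_file = os.path.join(self.model_path, "labels.txt")  # hypothetical file
        self.labels = []
        if os.path.exists(label_file):
            with open(label_file) as f:
                self.labels = [line.strip() for line in f]

    def _preprocess(self, data):
        # For an application/json request, "data" is the parsed Python dict,
        # for example {"images": "<base64-encoded image>"}.
        image_bytes = base64.b64decode(data["images"])
        image = Image.open(io.BytesIO(image_bytes))
        return {"images": np.asarray(image, dtype=np.float32)}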

TensorFlow Inference Script Example

The following is an example of the TensorFlow MnistService.

● Inference code

from PIL import Image
import numpy as np
from model_service.tfserving_model_service import TfServingBaseService


class mnist_service(TfServingBaseService):

    def _preprocess(self, data):
        preprocessed_data = {}

        for k, v in data.items():
            for file_name, file_content in v.items():
                image1 = Image.open(file_content)
                image1 = np.array(image1, dtype=np.float32)
                image1.resize((1, 784))
                preprocessed_data[k] = image1

        return preprocessed_data

    def _postprocess(self, data):
        infer_output = {}

        for output_name, result in data.items():
            infer_output["mnist_result"] = result[0].index(max(result[0]))

        return infer_output

● Request
curl -X POST \
  Real-time service address \
  -F images=@test.jpg

● Response{"mnist_result": 7}

The preceding code example resizes the images uploaded in the user's form to adapt to the model input shape: a 32 x 32 image is read with the Pillow library and resized to 1 x 784 to match the model input. In postprocessing, the model output is converted into a list so that it can be displayed through the RESTful API.

Inference Script Example of the Custom Inference Logic

First, define a dependency package in the configuration file. For details, see Example of a model configuration file using a custom dependency package. Then, use the following code example to implement the loading and inference of the model in saved_model format.

# -*- coding: utf-8 -*-
import json
import os
import threading

import numpy as np
import tensorflow as tf
from PIL import Image

from model_service.tfserving_model_service import TfServingBaseService
import logging

logger = logging.getLogger(__name__)


class MnistService(TfServingBaseService):

    def __init__(self, model_name, model_path):
        self.model_name = model_name
        self.model_path = model_path
        self.model_inputs = {}
        self.model_outputs = {}

        # The label file can be loaded here and used in the post-processing function.
        # Directories for storing the label.txt file on OBS and in the model package
        # with open(os.path.join(self.model_path, 'label.txt')) as f:
        #     self.label = json.load(f)

        # Load the model in saved_model format in non-blocking mode to prevent blocking timeout.
        thread = threading.Thread(target=self.get_tf_sess)
        thread.start()

    def get_tf_sess(self):
        # Load the model in saved_model format.
        # The session will be reused. Do not use the with statement.
        sess = tf.Session(graph=tf.Graph())
        meta_graph_def = tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], self.model_path)
        signature_defs = meta_graph_def.signature_def

        self.sess = sess

        signature = []

        # only one signature allowed
        for signature_def in signature_defs:
            signature.append(signature_def)
        if len(signature) == 1:
            model_signature = signature[0]
        else:
            logger.warning("signatures more than one, use serving_default signature")
            model_signature = tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY

        logger.info("model signature: %s", model_signature)

        for signature_name in meta_graph_def.signature_def[model_signature].inputs:
            tensorinfo = meta_graph_def.signature_def[model_signature].inputs[signature_name]
            name = tensorinfo.name
            op = self.sess.graph.get_tensor_by_name(name)
            self.model_inputs[signature_name] = op

        logger.info("model inputs: %s", self.model_inputs)

        for signature_name in meta_graph_def.signature_def[model_signature].outputs:
            tensorinfo = meta_graph_def.signature_def[model_signature].outputs[signature_name]
            name = tensorinfo.name
            op = self.sess.graph.get_tensor_by_name(name)
            self.model_outputs[signature_name] = op

        logger.info("model outputs: %s", self.model_outputs)

    def _preprocess(self, data):
        # Two request modes using HTTPS
        # 1. The request in form-data file format is as follows: data = {"Request key value": {"File name": <File io>}}
        # 2. The request in JSON format is as follows: data = json.loads("JSON body transferred by the API")
        preprocessed_data = {}

        for k, v in data.items():
            for file_name, file_content in v.items():
                image1 = Image.open(file_content)
                image1 = np.array(image1, dtype=np.float32)
                image1.resize((1, 28, 28))
                preprocessed_data[k] = image1

        return preprocessed_data

    def _inference(self, data):
        feed_dict = {}
        for k, v in data.items():
            if k not in self.model_inputs.keys():
                logger.error("input key %s is not in model inputs %s", k, list(self.model_inputs.keys()))
                raise Exception("input key %s is not in model inputs %s" % (k, list(self.model_inputs.keys())))
            feed_dict[self.model_inputs[k]] = v

        result = self.sess.run(self.model_outputs, feed_dict=feed_dict)
        logger.info('predict result : ' + str(result))

        return result

    def _postprocess(self, data):
        infer_output = {"mnist_result": []}
        for output_name, results in data.items():
            for result in results:
                infer_output["mnist_result"].append(np.argmax(result))

        return infer_output

    def __del__(self):
        self.sess.close()


11 Permissions Management

11.1 Creating a User and Granting Permissions

This section describes how to use IAM to implement fine-grained permissions control for your ModelArts resources. With IAM, you can:

● Create IAM users for employees based on the organizational structure of your enterprise. Each IAM user has their own security credentials, providing access to ModelArts resources.

● Grant only the permissions required for users to perform a task.
● Entrust a HUAWEI CLOUD account or cloud service to perform professional and efficient O&M on your ModelArts resources.

If your HUAWEI CLOUD account does not require individual IAM users, you can skip this section.

This section describes the procedure for granting permissions (see Figure 11-1).

Prerequisites

● ModelArts User is a fine-grained policy that can be used only if fine-grained access control is enabled in IAM. For more information, see Applying for Policy-based Access Control.
● Learn about the permissions supported by ModelArts (see ModelArts System Permissions) and choose policies or roles according to your requirements. For the system policies of other services, see Permissions Policies.


Process Flow

Figure 11-1 Process for granting ModelArts permissions

1. Create a user group and assign permissions to it.
Create a user group on the IAM console, and assign the ModelArts CommonOperations policy to the group.

2. Create a user.
Create a user on the IAM console and add the user to the group created in step 1.

3. Log in and verify permissions.
Log in to the ModelArts console as the newly created user and verify that the user has only the permissions granted by the ModelArts CommonOperations policy.
– Choose Service List > ModelArts. On the ModelArts management console, choose Dedicated Resource Pools > Create. If the creation fails (assuming the current user has only the ModelArts CommonOperations permission), the ModelArts CommonOperations policy has already taken effect.
– Choose any other service in Service List. If a message appears indicating that you have insufficient permissions to access the service, the ModelArts CommonOperations policy has already taken effect.

11.2 Creating a Custom Policy

Custom policies can be created as a supplement to the system policies of ModelArts. For the actions that can be added to custom policies, see ModelArts API Reference > Permissions Policies and Supported Actions.

You can create custom policies in either of the following ways:

● Visual editor: Select cloud services, actions, resources, and request conditions. This does not require knowledge of policy syntax.


● JSON: Edit JSON policies from scratch or based on an existing policy.

For details, see Creating a Custom Policy. The following section contains examples of common ModelArts custom policies.

Example Custom Policies

● Example 1: Denying ExeML project deletion
A deny policy must be used in conjunction with other policies to take effect. If the permissions assigned to a user contain both Allow and Deny actions, the Deny actions take precedence over the Allow actions.
The following method can be used if you need to assign the ModelArts FullAccess policy to a user but also forbid the user from deleting ExeML projects: create a custom policy that denies ExeML project deletion, and assign both policies to the group the user belongs to. Then the user can perform all operations on ModelArts except deleting ExeML projects. The following is an example deny policy:
{
  "Version": "1.1",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "modelarts:exemlProject:delete"
      ]
    }
  ]
}

● Example 2: Allowing users to use only development environments
When configuring the permission to use the ModelArts development environment for a user, you must also configure the minimum OBS permissions, including the permissions on OBS buckets and OBS objects, because this function depends on OBS authorization. The following is a policy configuration example for this user:
{
  "Version": "1.1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "obs:bucket:ListAllMyBuckets",
        "obs:bucket:CreateBucket",
        "obs:bucket:ListBucket",
        "obs:bucket:ListBucketVersions",
        "obs:bucket:HeadBucket",
        "obs:bucket:PutBucketAcl",
        "obs:object:PutObject",
        "obs:object:GetObject",
        "obs:object:GetObjectVersion",
        "obs:object:GetObjectVersionAcl"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "modelarts:notebook:list",
        "modelarts:notebook:create",
        "modelarts:notebook:get",
        "modelarts:notebook:update",
        "modelarts:notebook:delete",
        "modelarts:notebook:action",
        "modelarts:notebook:access"
      ]
    }
  ]
}


A Change History

Release Date | What's New
2019-05-31 | This is the first official release.
