C2090-303.Examcollection.Premium.Exam.97q

Number: C2090-303
Passing Score: 800
Time Limit: 120 min
File Version: 36.8

http://www.gratisexam.com/

Exam Code: C2090-303

Exam Name: IBM InfoSphere DataStage v9.1


QUESTION 1
In your ETL application design you have found several areas of common processing requirements in the mapping specifications. These common logic areas include code validation lookups and name formatting. The common logic areas have the same logic, but the jobs using them would have different column metadata. Which action gives you the best reusability design to effectively implement these common logic areas in your ETL application?

A. Create parallel routines for each of the common logic areas and for each of the unique column metadata formats.
B. Create separate jobs for each layout and choose the appropriate job to run within a job sequencer.
C. Create parallel shared containers and define columns combining all data formats.
D. Create parallel shared containers with Runtime Column Propagation (RCP) ON and define only the necessary common columns needed for the logic.

Correct Answer: D

QUESTION 2
When optimizing a job, Balanced Optimization will NOT search the job for what pattern?

A. Links
B. Stages
C. Sequencers
D. Property Settings

Correct Answer: C

QUESTION 3
You are asked to optimize the fork join job design in the exhibit. This job uses the sort aggregator and a left outer join on the ZIP code column. Currently all partitioning is set to "Auto" and automatic sort insertion is allowed.

Which change will reduce the cost of partitioning that occurs in this job?

A. Use Entire partitioning on the input links to the Aggregator and Join stages.
B. Hash partition and sort on ZIP code column on the input links to the Aggregator and Join stages.
C. Hash partition and sort on ZIP code column prior to the Copy stage and use entire partitioning on the Aggregator and Join stages.
D. Hash partition and sort on ZIP code column prior to the Copy stage, and use same partitioning on the Aggregator and Join stages.

Correct Answer: D

QUESTION 4
What two binding types are supported by Information Services Director (ISD) for a parallel job that is designed to be used as a service? (Choose two.)

A. EJB
B. SQL
C. HDFS
D. SOAP
E. STREAMS

Correct Answer: AD

QUESTION 5
A job design consists of an input Row Generator stage, a Filter stage, followed by a Transformer stage and an output Sequential File stage. The job is run on an SMP machine with a configuration file defined with three nodes. The $APT_DISABLE_COMBINATION variable is set to True. How many player processes will this job generate?

A. 8
B. 10
C. 12
D. 16

Correct Answer: A
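
A quick sanity check of this count, offered only as a sketch: assuming the usual defaults that the Row Generator and the target Sequential File each run sequentially (one player apiece) while the Filter and Transformer run on every node of the three-node configuration, the total comes to eight players.

    # Illustrative only: player-process count with $APT_DISABLE_COMBINATION=True,
    # assuming Row Generator and Sequential File run sequentially (1 player each)
    # and Filter and Transformer run on all configured nodes.
    nodes = 3
    players = 1 + nodes + nodes + 1  # Row Generator + Filter + Transformer + Sequential File
    print(players)  # 8, matching answer A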

QUESTION 6
Which partitioning method requires a key?

A. Same
B. Modulus
C. Sort Merge
D. Round Robin

Correct Answer: B

QUESTION 7
A job design consists of an input Row Generator stage, a Sort stage, followed by a Transformer stage and an output Data Set stage. The job is run on an SMP machine with a configuration file defined with four nodes. The $APT_DISABLE_COMBINATION variable is set to True. How many player processes will this job generate?

A. 7
B. 16
C. 13
D. 16

Correct Answer: C
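
The same style of check applies here, again only as a sketch: assume the Row Generator runs sequentially while the Sort, Transformer, and Data Set stages each run on all four nodes.

    # Illustrative only: 1 sequential Row Generator player plus 4 players each
    # for the Sort, Transformer, and Data Set stages.
    nodes = 4
    players = 1 + nodes + nodes + nodes  # Row Generator + Sort + Transformer + Data Set
    print(players)  # 13, matching answer C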

QUESTION 8
In the exhibit, a Funnel stage has two input links. Input 1 (Seq_File) comes from a Sequential File stage with "Readers per Node" set to "2". Input 2 (Dataset) comes from a dataset created with 3 partitions. In the Funnel stage, the funnel type is set to "Sequence".

The parallel configuration file contains 4 nodes. How many instances of the Funnel stage run in parallel?

A. 1
B. 2
C. 4
D. 6

Correct Answer: C

QUESTION 9
Your job sequence must be restartable. It runs Job1, Job2, and Job3 serially. It has been compiled with "Add checkpoints so sequence is restartable". Job1 must execute every run even after a failure. Which two properties must be selected to ensure that Job1 is run each time, even after a failure? (Choose two.)

A. Set the Job1 Activity stage to "Do not checkpoint run".
B. Set trigger on the Job1 Activity stage to "Unconditional".
C. In the Job1 Activity stage set the Execution action to "Run".
D. In the Job1 Activity stage set the Execution action to "Reset if required, then run".
E. Use the Nested Condition Activity with a trigger leading to Job1; set the trigger expression type to "Unconditional".

Correct Answer: AD

QUESTION 10
What is the result of running the following command: dsjob -report DSProject ProcData

A. Generates a report about the ProcData job, including information about its stages and links.
B. Returns a report of the last run of the ProcData job in a DataStage project named DSProject.
C. Runs the DataStage job named ProcData and returns performance information, including the number of rows processed.
D. Runs the DataStage job named ProcData and returns job status information, including whether the job aborted or ran without warnings.

Correct Answer: B

QUESTION 11
You would like to pass values into parameters that will be used in a variety of downstream activity stages within a job sequence. What are two valid ways to do this? (Choose two.)

A. Use local parameters.
B. Place a parameter set stage on the job sequence.
C. Add a Transformer stage variable to the job sequence canvas.
D. Check the "Propagate Parameters" checkbox in the Sequence Job properties.
E. Use the UserVariablesActivity stage to populate the local parameters from an outside source such as a file.

Correct Answer: AE

QUESTION 12
On the DataStage development server, you have been making enhancements to a copy of a DataStage job running on the production server. You have been asked to document the changes you have made to the job. What tool in DataStage Designer would you use?

A. Compare Against
B. diffapicmdline.exe
C. DSMakeJobReport
D. Cross Project Compare

Correct Answer: D

QUESTION 13
Your customer is using Source Code Control Integration for Information Server and has tagged artifacts for version 1. You must create a deployment package from version 1. Before you create the package you will have to ensure the project is up to date with version 1. What two things must you do to update the metadata repository with the artifacts tagged as version 1? (Choose two.)

A. Right-click the asset and click the Deploy command.
B. Right-click the asset and click the Team Import command.
C. Right-click the asset and click Update From Source Control Workspace.
D. Right-click the asset and click Replace From Source Control Workspace.
E. Right-click the asset and click the Team command to update the Source Control Workspace with the asset.

Correct Answer: DE

QUESTION 14
What two features distinguish the Operations Console from the Director job log? (Choose two.)

A. Jobs can be started and stopped in Director, but not in the Operations Console.
B. The Operations Console can monitor jobs running on only one DataStage engine.
C. Workload management is supported within Director, but not in the Operations Console.
D. The Operations Console can monitor jobs running on more than one DataStage engine.
E. The Operations Console can run on systems where the DataStage clients are not installed.

Correct Answer: DE

QUESTION 15
The Score is divided into which two sections? (Choose two.)

A. Stages
B. File sets
C. Schemas
D. Data sets
E. Operators

Correct Answer: DE

QUESTION 16
Which two environment variables add additional reporting information in the job log for DataStage jobs? (Choose two.)

A. $APT_IO_MAP
B. $OSH_EXPLAIN
C. $APT_STARTUP_STATUS
D. $APT_EXPORT_FLUSH_COUNT
E. $APT_PM_STARTUP_CONCURRENCY

Correct Answer: BC

QUESTION 17
A job validates account numbers with a reference file using a Join stage, which is hash partitioned by account number. Runtime monitoring reveals that some partitions process many more rows than others. Assuming adequate hardware resources, which action can be used to improve the performance of the job?

A. Replace the Join with a Merge stage.
B. Change the number of nodes in the configuration file.
C. Add a Sort stage in front of the Join stage. Sort by account number.
D. Use Round Robin partitioning on the stream and Entire partitioning on the reference.

Correct Answer: B

QUESTION 18
Which option is required to identify a particular job's player processes?

A. Set $APT_DUMP_SCORE to true.
B. Set $APT_PM_SHOW_PIDS to true.
C. Log onto the server and issue the command "ps -ef | grep ds".
D. Use the DataStage Director Job administration screen to display active player processes.

Correct Answer: B

QUESTION 19
Which two parallel job stages allow you to use partial schemas? (Choose two.)

A. Peek stage
B. File Set stage
C. Data Set stage
D. Column Export stage
E. External Target stage

Correct Answer: BE

QUESTION 20
When using Runtime Column Propagation, which two stages require a schema file? (Choose two.)

A. Peek stage
B. Pivot stage
C. Column Import stage
D. DB2 Connector stage
E. Sequential File stage

Correct Answer: CE

QUESTION 21
What are the two Transfer Protocol Transfer Mode property options for the FTP Enterprise stage? (Choose two.)

A. FTP
B. EFTP
C. TFTP
D. SFTP
E. RFTP

Correct Answer: AD

QUESTION 22
Identify the two statements that are true about the functionality of the XML Pack 3.0. (Choose two.)

A. XML Stages are Plug-in stages.
B. XML Stage can be found in the Database folder on the palette.
C. Uses a unique custom GUI interface called the Assembly Editor.
D. It includes the XML Input, XML Output, and XML Transformer stages.
E. A single XML Stage, which can be used as a source, target, or transformation.

Correct Answer: CE

QUESTION 23
Identify the two delimiter areas available to be configured in the Sequential File format tab properties? (Choose two.)

A. File delimiter
B. Null delimiter
C. Final delimiter
D. Field delimiter
E. End of group delimiter

Correct Answer: CD

QUESTION 24
When using a Sequential File stage as a source, what are the two reject mode property options? (Choose two.)

A. Set
B. Fail
C. Save
D. Convert
E. Continue

Correct Answer: BE

QUESTION 25
Which two statements are true about Data Sets? (Choose two.)

A. Data sets contain ASCII data.
B. Data Sets preserve partitioning.
C. Data Sets require repartitioning.
D. Data Sets represent persistent data.
E. Data Sets require import/export conversions.

Correct Answer: BD

QUESTION 26
What is the correct method to process a file containing multiple record types using a Complex Flat File stage?

A. Flatten the record types into a single record type.
B. Manually break the file into multiple files by record type.
C. Define record definitions on the Constraints tab of the Complex Flat File stage.
D. Load a table definition for each record type on the Records tab of the Complex Flat File stage.

Correct Answer: D

QUESTION 27
Which two file stages allow you to configure rejecting data to a reject link? (Choose two.)

A. Data Set Stage
B. Compare Stage
C. Big Data File Stage
D. Lookup File Set Stage
E. Complex Flat File Stage

Correct Answer: CE

QUESTION 28
Identify two items that are created as a result of running a Balanced Optimization on a job that accesses a Hadoop distributed file system as a source? (Choose two.)

A. A JAQL stage is found in the optimized job result.
B. A Big Data File stage is found in the optimized job results.
C. A Balanced Optimization parameter set is found in the project.
D. A Balanced Optimization Shared Container is found in the project.
E. A MapReduce Transformer stage is found in the optimized job result.

Correct Answer: AC

QUESTION 29
A customer must compare a date column with a job parameter date to determine which output links the row belongs on. What stage should be used for this requirement?

A. Filter stage
B. Switch stage
C. Compare stage
D. Transformer stage

Correct Answer: D

QUESTION 30
Rows of data going into a Transformer stage are sorted and hash partitioned by the Input.Product column. Using stage variables, how can you determine when a new row is the first of a new group of Product rows?

A. Create a stage variable named sv_IsNewProduct and follow it by a second stage variable named sv_Product. Map the Input.Product column to sv_Product. The derivation for sv_IsNewProduct is: IF Input.Product = sv_Product THEN "YES" ELSE "NO".
B. Create a stage variable named sv_IsNewProduct and follow it by a second stage variable named sv_Product. Map the Input.Product column to sv_Product. The derivation for sv_IsNewProduct is: IF Input.Product <> sv_Product THEN "YES" ELSE "NO".
C. Create a stage variable named sv_Product and follow it by a second stage variable named sv_IsNewProduct. Map the Input.Product column to sv_Product. The derivation for sv_IsNewProduct is: IF Input.Product = sv_Product THEN "YES" ELSE "NO".
D. Create a stage variable named sv_Product and follow it by a second stage variable named sv_IsNewProduct. Map the Input.Product column to sv_Product. The derivation for sv_IsNewProduct is: IF Input.Product <> sv_Product THEN "YES" ELSE "NO".

Correct Answer: B
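
Stage variables are evaluated top to bottom for each row, so in option B sv_IsNewProduct is computed while sv_Product still holds the previous row's value. The Python sketch below simulates that ordering with made-up data; the variable names simply mirror the question.

    # Illustrative simulation of the top-to-bottom stage variable evaluation in option B.
    rows = ["A", "A", "B", "B", "C"]     # hypothetical Input.Product values, already sorted
    sv_product = None                    # still holds the previous row's value when a new row arrives
    for product in rows:
        sv_is_new_product = "YES" if product != sv_product else "NO"  # evaluated first
        sv_product = product                                          # evaluated second
        print(product, sv_is_new_product)  # the first row of each group prints "YES"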

QUESTION 31
Which statement describes what happens when Runtime Column Propagation is disabled for a parallel job?

A. An input column value flows into a target column only if it matches it by name.
B. An input column value flows into a target column only if it is explicitly mapped to it.
C. You must set the APT_AUTO_MAP project environment variable to true to allow output link mapping to occur.
D. An input column value flows into a target column based on its position in the input row. For example, the first column in the input row goes into the first target column.

Correct Answer: B

QUESTION 32
Which statement is true when using the SaveInputRecord() function in a Transformer stage?

A. You can only use the SaveInputRecord() function in Loop variable derivations.
B. You can access the saved queue records using Vector referencing in Stage variable derivations.
C. You must retrieve all saved queue records using the GetSavedInputRecord() function within Loop variable derivations.
D. You must retrieve all saved queue records using the GetSavedInputRecord() function within Stage variable derivations.

Correct Answer: C

QUESTION 33
Which derivations are executed first in the Transformer stage?

A. Input column derivations
B. Loop variable derivations
C. Stage variable derivations
D. Output column derivations

Correct Answer: C

QUESTION 34
In a Transformer, which two mappings can be handled by default type conversions? (Choose two.)

A. Integer input column mapped to raw output column.
B. Date input column mapped to a string output column.
C. String input column mapped to a date output column.
D. String input column mapped to integer output column.
E. Integer input column mapped to string output column.

Correct Answer: DE

QUESTION 35
Identify two different types of custom stages you can create to extend the Parallel job syntax? (Choose two.)

A. Input stage
B. Basic stage
C. Group stage
D. Custom stage
E. Wrapped stage

Correct Answer: DE

QUESTION 36
Which two statements are true about stage variables in a Transformer Stage? (Choose two.)

A. Stage variables can be set to NULL.
B. Stage variables cannot be set to NULL.
C. Varchar stage variables can be initialized with spaces.
D. Stage variables are refreshed with default values before each new input row is processed.
E. A stage variable in one Transformer can refer to a stage variable in another Transformer, as long as the second Transformer was processed earlier in the job flow.

Correct Answer: AC

QUESTION 37
What is the purpose of the APT_DUMP_SCORE environment variable?

A. There is no such environment variable.
B. It is an environment variable that turns on the job monitor.
C. It is an environment variable that enables the collection of runtime performance statistics.
D. It is a reporting environment variable that adds additional runtime information in the job log.

Correct Answer: D

QUESTION 38
Which two data repositories can be used for user authentication within the Information Server Suite? (Choose two.)

A. IIS Web Console
B. IBM Metadata repository
C. Standalone LDAP registry
D. Operations Console database
E. IBM Information Server user directory

Correct Answer: CE

QUESTION 39
Which two statements are true about the use of named node pools? (Choose two.)

A. Grid environments must have named node pools for data processing.
B. Named node pools can allow separation of buffering from sorting disks.
C. When named node pools are used, DataStage uses named pipes between stages.
D. Named node pools limit the total number of partitions that can be specified in the configuration file.
E. Named node pools constraints will limit stages to be executed only on the nodes defined in the node pools.

Correct Answer: BE

QUESTION 40
Which step is required to change from a normal lookup to a sparse lookup in an ODBC Connector stage?

A. Change the partitioning to hash.
B. Sort the data on the reference link.
C. Change the lookup option in the stage properties to "Sparse".
D. Replace columns at the beginning of a SELECT statement with a wildcard asterisk (*).

Correct Answer: C

QUESTION 41
Which two pieces of information are required to be specified for the input link on a Netezza Connector stage? (Choose two.)

A. Partitioning
B. Server name
C. Table definitions
D. Buffering settings
E. Error log directory

Correct Answer: AD

QUESTION 42
Which requirement must be met to read from a database in parallel using the ODBC connector?

A. ODBC connector always reads in parallel.
B. Set the Enable partitioning property to Yes.
C. Configure environment variable $APT_PARTITION_COUNT.
D. Configure environment variable $APT_MAX_TRANSPORT_BLOCK_SIZE.

Correct Answer: B

QUESTION 43
Which two statements are true about the Additional Connections Options property in the Teradata Connector stage, which specifies details about the number of connections to Teradata? (Choose two.)

A. The default for requestedsessions is the minimum number of available sessions.
B. The default for requestedsessions is the maximum number of available sessions.
C. Requestedsessions is a number between 1 and the number of vprocs in the operating system.
D. Sessionsperplayer determines the number of connections each player in the job has to Teradata.
E. Total requested sessions equals sessions per player multiplied by number of nodes multiplied by players per node. The default value is 4.

Correct Answer: BD

QUESTION 44
Configuring the weighting column of an Aggregator stage affects which two options? (Choose two.)

A. Sum
B. Maximum Value
C. Average of Weights
D. Coefficient of Variation
E. Uncorrected Sum of Squares

Correct Answer: AE

QUESTION 45
The parallel framework was extended for real-time applications. Identify two of these aspects. (Choose two.)

A. XML stage.
B. End-of-wave.
C. Real-time stage types that re-run jobs.
D. Real-time stage types that keep jobs always up and running.
E. Support for transactions within source database connector stages.

Correct Answer: BD

QUESTION 46
How must the input data set be organized for input into the Join stage? (Choose two.)

A. Unsorted
B. Key partitioned
C. Hash partitioned
D. Entire partitioned
E. Sorted by Join key

Correct Answer: BE

QUESTION 47
The Change Apply stage produces a change Data Set with a new column representing the code for the type of change. What are two change values identified by these code values? (Choose two.)

A. Edit
B. Final
C. Copy
D. Deleted
E. Remove Duplicates

Correct Answer: CD

QUESTION 48
What stage allows for more than one reject link?

A. Join stage
B. Merge stage
C. Lookup stage
D. Funnel stage

Correct Answer: B

QUESTION 49
A 100MB input dataset has even distribution across 400 unique key values. When you run with a 4-node configuration file, which two changes could improve sort performance in this scenario? (Choose two.)

A. Convert all numeric fields to varchars.
B. Set $APT_TSORT_STRESS_BLOCKSIZE to 50MB.
C. Alter the Sort stage properties to perform a stable sort.
D. Specify Round Robin partitioning on the input link to the Sort stage.
E. Specify "Restrict Memory Usage" to 60MB on the Sort stage properties.

Correct Answer: BE
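
A rough back-of-envelope check for option E, offered as an illustration only and assuming the data is spread evenly across the four partitions:

    # Illustrative only: 100 MB spread over 4 partitions is about 25 MB per sort,
    # so a 60 MB "Restrict Memory Usage" setting lets each partition sort in memory.
    dataset_mb = 100
    nodes = 4
    per_partition_mb = dataset_mb / nodes
    print(per_partition_mb, per_partition_mb <= 60)  # 25.0 True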

QUESTION 50
Which statement is correct about the Data Rules stage?

A. The Data Rules stage works with rule definitions only; not executable rules.
B. As a best practice, you should create and publish new rules from the Data Rules stage.
C. If you have the Rule Creator role in InfoSphere Information Analyzer, you can create and publish rule definitions and rule set definitions directly from the stage itself.
D. When a job that uses the Data Rules stage runs, the output of the stage is passed to the downstream stages and results are stored in the Analysis Results database (IADB).

Correct Answer: A

QUESTION 51
Which job design technique can be used to give unique names to sequential output files that are used in multi-instance jobs?

A. Use parameters to identify file names.
B. Generate unique file names by using a macro.
C. Use DSJobInvocationID to generate a unique filename.
D. Use a Transformer stage variable to generate the name.

Correct Answer: A

QUESTION 52
The ODBC stage can handle which two SQL Server data types? (Choose two.)

A. Date
B. Time
C. GUID
D. Datetime
E. SmallDateTime

Correct Answer: CE

QUESTION 53
Which DB2 to InfoSphere DataStage data type conversion is correct when reading data with the DB2 Connector stage?

A. XML to SQL_WVARCHAR
B. BIGINT to SQL_BIGINT (INT32)
C. VARCHAR, 32768 to SQL_VARCHAR
D. CHAR FOR BIT DATA to SQL_VARBINARY

Correct Answer: A

QUESTION 54
Which Oracle data type conversion is correct?

A. Oracle data type RAW converts to RAW in Oracle Connector stage.
B. Oracle data type NUMBER(6,0) converts to INT32 in Oracle Connector stage.
C. Oracle data type NUMBER(15,0) converts to INT32 in Oracle Connector stage.
D. Oracle data type NUMBER converts to DECIMAL(38,0) in Oracle Connector stage.

Correct Answer: B

QUESTION 55
Which two statements about using a Load write method in an Oracle Connector stage to tables that have indexes on them are true? (Choose two.)

A. Set the Upsert mode property to "Index".
B. Set the Index mode property to "Bypass".
C. The Load Write method uses the Parallel Direct Path load method.
D. The Load Write method uses "Rebuild" mode with no logging automatically.
E. Set the environment variable APT_ORACLE_LOAD_OPTIONS to "OPTIONS (DIRECT=TRUE, PARALLEL=FALSE)".

Correct Answer: CE

QUESTION 56
Which Oracle Connector stage property can be set to tune job performance?

A. Array size
B. Memory size
C. Partition size
D. Transaction size

Correct Answer: A

QUESTION 57
In a Transformer expression for a stage variable, there is a nullable input column. Assume the legacy NULL processing option is turned off. What happens when a row is processed that contains NULL in that input column?

A. The job aborts.
B. The row is rejected.
C. NULL is written to the stage variable.
D. The value written to the stage variable is undetermined.

Correct Answer: C

QUESTION 58
Identify two different types of custom stages you can create to extend the Parallel job syntax? (Choose two.)

A. Input stage
B. Basic stage
C. Group stage
D. Custom stage
E. Wrapped stage

Correct Answer: DE

QUESTION 59
When using the loop functionality in a Transformer, which statement is true regarding Transformer processing?

A. Stage variables can be referenced in loop conditions.
B. Stage variables can be executed after loop variable expressions.
C. Loop variable expressions are executed before input link column expressions.
D. Output links can be excluded from being associated with a True loop condition.

Correct Answer: A

QUESTION 60
In your job you are using a Sequential File stage as a source. You have decided to use a schema file as an alternative way of specifying column definitions and record format properties. What two items are true in configuring this stage in your parallel job? (Choose two.)

A. Enable Run Time Column Propagation.
B. Enable Run Time Schema Propagation.
C. Configure the column tab with one column named 'Input'.
D. Configure the Schema File property with the path of the schema file definition.
E. Configure the APT_SEQFILE_HASSCHEMA property with the schema file path.

Correct Answer: AD

QUESTION 61
Which stage classifies data rows from a single input into groups and computes totals?

A. Modify stage
B. Compare stage
C. Aggregator stage
D. Transformer stage

Correct Answer: C

QUESTION 62
Which statement describes a SCD Type One update in the Slowly Changing Dimension stage?

A. Adds a new row to the fact table.
B. Adds a new row to a dimension table.
C. Overwrites an attribute in the fact table.
D. Overwrites an attribute in a dimension table.

Correct Answer: D

QUESTION 63
Which derivations are executed last in the Transformer stage?

A. Input column derivations
B. Loop variable derivations
C. Output column derivations
D. Stage variable derivations

Correct Answer: C

QUESTION 64
You are processing groups of rows in a Transformer. The first row in each group contains "1" in the Flag column and "0" in the remaining rows of the group. At the end of each group you want to sum and output the QTY column values. Which technique will enable you to retrieve the sum of the last group?

A. Output the sum that you generated each time you process a row for which the LastRow() function returns True.
B. Output the sum that you generated up to the previous row each time you process a row with a "1" in the Flag column.
C. Output a running total for each group for each row. Follow the Transformer stage by an Aggregator stage. Take the MAX of the QTY column for each group.
D. Output the sum that you generated up to the previous row each time you process a row with a "1" in the Flag column. Use the FirstRow() function to determine when the group is done.

Correct Answer: C
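
The Python sketch below illustrates the idea behind option C with made-up rows (the group column is hypothetical): every row carries the running total for its group, and a downstream MAX per group then yields each group's sum, including the sum of the last group.

    # Illustrative only: running total per group (Transformer stage variable),
    # followed by MAX per group (Aggregator stage).
    rows = [("G1", 1, 5), ("G1", 0, 3), ("G2", 1, 7), ("G2", 0, 2), ("G2", 0, 4)]  # (group, Flag, QTY)

    running = 0
    staged = []
    for group, flag, qty in rows:
        running = qty if flag == 1 else running + qty  # reset the total when Flag is "1"
        staged.append((group, running))                # running total output on every row

    group_sums = {}
    for group, total in staged:
        group_sums[group] = max(group_sums.get(group, 0), total)  # MAX of the running total per group
    print(group_sums)  # {'G1': 8, 'G2': 13} - the last group's sum is available too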

QUESTION 65
The derivation for a stage variable is: Upcase(input_column1) : ' ' : Upcase(input_column2). Suppose that input_column1 contains a NULL value. Assume the legacy NULL processing option is turned off. Which behavior is expected?

A. The job aborts.
B. NULL is written to the target stage variable.
C. The input row is either dropped or rejected depending on whether the Transformer has a reject link.
D. The target stage variable is populated with spaces or zeros depending on the stage variable data type.

Correct Answer: B
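
A loose Python analogue of this NULL propagation (None standing in for NULL, column names taken from the question): when any operand of the concatenation is NULL, the whole expression evaluates to NULL, and that is what lands in the stage variable.

    # Illustrative only: NULL (None) propagating through Upcase(col1) : ' ' : Upcase(col2).
    def upcase_concat(col1, col2):
        if col1 is None or col2 is None:
            return None                  # any NULL operand makes the whole expression NULL
        return col1.upper() + " " + col2.upper()

    print(upcase_concat("john", "smith"))  # JOHN SMITH
    print(upcase_concat(None, "smith"))    # None - NULL is written to the stage variable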

QUESTION 66
Which statement is true about table definitions created in DataStage Designer?

A. By default, table definitions created in DataStage Designer are visible to other Information Server products.
B. Table definitions created in DataStage Designer are local to DataStage and cannot be shared with other Information Server products.
C. When a table definition is created in one DataStage project, it is automatically available in other DataStage projects, but not outside of DataStage.
D. Table definitions created in DataStage Designer are not by default available to other Information Server products, but they can be shared with other Information Server products.

Correct Answer: D

QUESTION 67
What are two advantages of using Runtime Column Propagation (RCP)? (Choose two.)

A. RCP forces a developer to define all columns explicitly.
B. Only columns used in the data flow need to be defined.
C. Sequential files don't require schema files when using RCP.
D. Only columns that are defined as VarChar need RCP enabled.
E. Columns not specifically used in the flow are propagated as if they were.

Correct Answer: BE

QUESTION 68
In a file defining a partial schema, which two properties define the record? (Choose two.)

A. intact
B. delim
C. position
D. length_of_record
E. record_delim_string

Correct Answer: AE

QUESTION 69
What role must a user have to delete shared metadata assets from the repository?

A. DataStage Administrator
B. Business Glossary Administrator
C. Common Metadata Administrator
D. Information Analyzer Administrator

Correct Answer: C

QUESTION 70
Your job is using the Sequential File stage as a source. Identify two stage property configurations that will allow you to accomplish parallel reads? (Choose two.)

A. Read Method property set to 'File Pattern' and the environment variable $APT_IMPORT_PATTERN_USES_FILESET is set to 'FALSE'.
B. Read Method property set to 'File Pattern' and the 'Readers per Node' property value set to greater than 1.
C. Read Method property is set to 'Parallel' with the Key property set to an Input column that has the file names.
D. Read Method property set to 'Specific File(s)' with only one file specified and the 'Readers Per Node' property value set to greater than 1.
E. Read Method property set to 'Specific File(s)' with more than one file specified and each file specified has the same format/column properties.

Correct Answer: DE

QUESTION 71
Identify the two statements that are true about the functionality of the XML Pack 3.0. (Choose two.)

A. XML Stages are Plug-in stages.
B. XML Stage can be found in the Database folder on the palette.
C. Uses a unique custom GUI interface called the Assembly Editor.
D. It includes the XML Input, XML Output, and XML Transformer stages.
E. A single XML Stage, which can be used as a source, target, or transformation.

Correct Answer: CE

QUESTION 72
When using the Sequential File stage as a source, what two property options allow you to add extra columns about the file(s) you are reading onto the output link? (Choose two.)

A. File Name Column
B. Percent Null Column
C. Row number Column
D. Surrogate Key Column
E. Number of Records Column

Correct Answer: AC

QUESTION 73
The parallel framework supports standard and complex data types in the SQL type column tab property. Identify the two complex data types? (Choose two.)

A. Vector
B. VarChar
C. Linked list
D. Subrecord
E. Binary trees

Correct Answer: AD

QUESTION 74
Identify two areas that DataStage can integrate with a Hadoop environment. (Choose two.)

A. Use the Pig Activity stage in a sequencer to invoke a Pig MapReduce program.
B. Use the Hive Activity stage in a sequencer to invoke a Hive MapReduce program.
C. Use the Big Data File stage to access files on the Hadoop Distributed File System.
D. Use the Oozie Workflow Activity stage in a sequencer job to invoke Oozie work flows.
E. Use the MapReduce Transformer stage to convert Hadoop Distributed File System data.

Correct Answer: CD

QUESTION 75
The number of File Set data files created depends upon what two items? (Choose two.)

A. Amount of memory.
B. Schema definition of the file.
C. Operating system limitations.
D. Number of logical processing nodes.
E. Number of disks in the export or default disk pool connected to each processing node in the default node pool.

Correct Answer: DE

QUESTION 76
What are the two Transfer Protocol Transfer Mode property options for the FTP Enterprise stage? (Choose two.)

A. FTP
B. EFTP
C. TFTP
D. SFTP
E. RFTP

Correct Answer: AD

QUESTION 77
Identify two restructure stages that allow you to create or organize vectors in the output link results? (Choose two.)

A. Split Vector
B. Column Import
C. Merge records
D. Split Subrecord
E. Make Subrecord

Correct Answer: DE

QUESTION 78
You are using the Change Capture stage in your job design to identify changes made to the input link (before image) to obtain the results in the output link. This job will be using a multi-node configuration file when executed. What are two requirements on the input link data for the Change Capture stage? (Choose two.)

A. sorted
B. merged links
C. parallel mode
D. key partitioned
E. sequential mode

Correct Answer: AD

QUESTION 79
What are two statistics or attributes that can be added to the output of a Data Rule stage? (Choose two.)

A. Record ID
B. System timestamp
C. Number of violations
D. Job execution owner
E. Percentage of violations

Correct Answer: AB

QUESTION 80
A DataStage job uses an Inner Join to combine data from two source parallel datasets that were written to disk in sort order based on the join key columns. Which two methods could be used to dramatically improve performance of this job? (Choose two.)

A. Disable job monitoring.
B. Set the environment variable $APT_SORT_INSERTION_CHECK_ONLY.
C. Unset the Preserve Partitioning flag on the output of each parallel dataset.
D. Explicitly specify hash partitioning and sorting on each input to the Join stage.
E. Add a parallel sort stage before each Join input, specifying the "Don't Sort, Previously Grouped" sort key mode for each key.

Correct Answer: BE

QUESTION 81
Which statement is true about creating DataStage projects?

A. DataStage projects cannot be created during DataStage installation.
B. After installation only DataStage developers can create DataStage projects.
C. After installation DataStage projects can be created in DataStage Administrator.
D. After installation DataStage projects can be created in the Information Server Web Console.

Correct Answer: C

QUESTION 82
You are editing the configuration file. Which statement is true about file system efficiency?

A. Using multiple scratch spaces in a logical node improves performance.
B. For the resource disk, it is best to use the partition where the engine is installed.
C. Creating scratch disk spaces on an internal disk rather than on an NFS drive improves sort performance.
D. In a clustered environment where NAS is used for a common storage, you must specify the internal disk for scratch disk.

Correct Answer: C

QUESTION 83
The effective use of naming conventions means that objects need to be spaced appropriately on the DataStage Designer canvas. For stages with multiple links, expanding the icon border can significantly improve readability. This approach takes extra effort at first, so a pattern of work needs to be identified and adopted to help development. Which feature of Designer can improve development speed?

A. Palette Layout
B. Repository Layout
C. Snap to Grid Feature
D. Show Performance Statistics Feature

Correct Answer: C

QUESTION 84
You have created three parallel jobs (Job A, B and C) in which the output of one job is the input to the other job. You are required to create processing that manages this data relationship of the jobs and provide job level restartability. What two tasks will accomplish these objectives? (Choose two.)

A. Enable runtime column propagation for the job sequence running the jobs.
B. Set the 'Add checkpoints so sequence is restartable' option in the Sequencer job.
C. Enable the 'Add checkpoints so container is restartable' option for each of the parallel shared containers.
D. Create two parallel shared containers that have the logic of Job B and Job C and include those shared containers in the correct flow order within the design of Job A.
E. Create a Sequencer job that has triggered events configured allowing Job A to run first, then Job B to run when A completes successfully, and then Job C to run when Job B completes successfully.

Correct Answer: BE

QUESTION 85
In your parallel job design you have selected a parallel shared container to be included. Which area of your job design is required to be configured to use the parallel shared container?

A. List the jobs that will be allowed to use the container.
B. Configure your job properties General tab "Allow Container" option to be True.
C. Configure the job name reference the parallel shared container originates from.
D. Configure the number of input and/or output links to support the parallel shared container.

Correct Answer: D

QUESTION 86
A job design reads from a complex flat file, performs some transformations on the data, and outputs the results to an ISD output stage. How can the parameter values be passed to this job at run-time?

A. Include the parameter values in the data.
B. Change the properties of the information provider.
C. Pass the parameter values at the time of the service request.
D. Execute a DSSetParam with the values at job execution time.

Correct Answer: C

QUESTION 87
Click on the Exhibit button.

Provided you have enough system resources, what is the maximum number of jobs that could be running concurrently in this image?

A. 2
B. 3
C. 4
D. 5

Correct Answer: B

QUESTION 88
You would like to run a particular processing job within a job sequence for each weekday. What two methods could be used? (Choose two.)

A. Set the frequency property in the job scheduler to weekdays only.
B. Add the job scheduler stage to the job sequence and set to weekdays only.
C. Call a routine in the job sequencer that starts the processing job for each day you would like to process.
D. Have a parameter set that contains the days of the week you would like to process and a routine to parse the days of the week contained in the "day" parameter.
E. Start Loop and End Loop activity stages on the job sequencer canvas where you loop through the days and pass a value for each day into the job via parameter.

Correct Answer: CE

QUESTION 89
Which two commands start a DataStage job named procRows in a project named DSProject and pass the value 200 to the numRows parameter? (Choose two.)

A. dsjob -run -paramfile numRows=200 DSProject procRows
B. dsjob -run -mode normal -param numRows=200 DSProject procRows
C. dsjob -run -mode validate -param numRows=200 DSProject procRows
D. dsjob -run -mode normal -paramfile numRows=200 DSProject procRows
E. dsjob -run -queue HighPriority -param numRows=200 DSProject procRows

Correct Answer: BE

QUESTION 90
You have made a copy of your job and made major changes to a job in your project. You now want to identify all the changes that have been made. What task will allow you to identify these changes?

A. Export the original job to a backup directory.
B. Export the modified job to the backup directory.
C. Select the job, then right click Compare against.
D. Select the job, then right click Cross Project Compare.

Correct Answer: C

QUESTION 91
What two Information Server tools can be used to create deployment packages for DataStage jobs? (Choose two.)

A. istool
B. Information Server Console
C. Information Server Manager
D. InfoSphere Metadata Asset Manager
E. Information Server Web Console - Administration tab

Correct Answer: AC

QUESTION 92
A job using a one-node configuration file writes to a target Sequential File stage. The target Sequential File stage has been set to write to three different sequential files. How many instances of the Sequential File stage will run in parallel?

A. One.
B. Two.
C. Three.
D. The job will abort, because a configuration file with at least three nodes is required.

Correct Answer: C

QUESTION 93
The data going into the target Sequential File stage is sorted in each partition by the Date field. You want the data in the sequential file to be sorted by Date. Which collection algorithm should you choose in the Sequential File stage?

A. Auto
B. Ordered
C. Sort Merge
D. Round Robin

Correct Answer: C

QUESTION 94
What two computer system resources on the DataStage engine are monitored in the Operations Console? (Choose two.)

A. Disk
B. CPU
C. Memory
D. Scratch disk
E. XMETA database

Correct Answer: BC

QUESTION 95
You want to better understand the tsort operators that are inserted into the job flow at runtime. Which environment variable can provide this information?

A. $OSH_DUMP
B. $APT_DUMP_SCORE
C. $APT_STARTUP_STATUS
D. $APT_PM_PLAYER_MEMORY

Correct Answer: B

QUESTION 96
Click the Exhibit button.

The job in the exhibit runs to successful completion with no warnings. However, a validation test identifies that the CUSTOMER_MASTER table is empty. You suspect the cause to be related to the NEW_CUSTOMERS table or a defect in the job. Examining runtime metadata may help you track down the problem. Which two product features will help you isolate the cause of the problem? (Choose two.)

A. Director job monitor tool.
B. Designer Performance Analyzer tool.
C. Designer "Show performance statistics" option.
D. The view data button in the target Connector stage.
E. InfoSphere Metadata Asset Manager (IMAM) operational metadata reports.

Correct Answer: AC

QUESTION 97
Which option is required to identify a particular job's player processes?

A. Set $APT_DUMP_SCORE to true.
B. Set $APT_PM_SHOW_PIDS to true.
C. Log onto the server and issue the command "ps -ef | grep ds".
D. Use the DataStage Director Job administration screen to display active player processes.

Correct Answer: B
