Oozie Tutorial


0 Definitions

Action: An execution/computation task (a Map-Reduce job, a Pig job, a shell command). It can also be referred to as a task or 'action node'.

Workflow: A collection of actions arranged in a control dependency DAG (Directed Acyclic Graph). A "control dependency" from one action to another means that the second action can't run until the first action has completed.

Workflow Definition: A programmatic description of a workflow that can be executed.

Workflow Definition Language: The language used to define a Workflow Definition.

Workflow Job: An executable instance of a workflow definition.

Workflow Engine: A system that executes workflow jobs. It can also be referred to as a DAG engine.

1 Specification Highlights

A Workflow application is a DAG that coordinates the following types of actions: Hadoop, Pig, and sub-workflows.

Flow control operations within the workflow application can be done using decision, fork and join nodes. Cycles in workflows are not supported.

Actions and decisions can be parameterized with job properties, action output (i.e. Hadoop counters) and file information (file exists, file size, etc.). Formal parameters are expressed in the workflow definition as ${VAR} variables.

A Workflow application is a ZIP file that contains the workflow definition (an XML file) and all the files necessary to run its actions: JAR files for Map/Reduce jobs, shells for streaming Map/Reduce jobs, native libraries, Pig scripts, and other resource files.

Before running a workflow job, the corresponding workflow application must be deployed in Oozie.

Deploying a workflow application and running workflow jobs can be done via command line tools, a WS API and a Java API.

Monitoring the system and workflow jobs can be done via a web console, command line tools, a WS API and a Java API.

When submitting a workflow job, a set of properties resolving all the formal parameters in the workflow definition must be provided. This set of properties is a Hadoop configuration.

Possible states for a workflow job are: PREP, RUNNING, SUSPENDED, SUCCEEDED, KILLED and FAILED.

In the case of an action start failure in a workflow job, depending on the type of failure, Oozie will attempt automatic retries, request a manual retry or fail the workflow job.

Oozie can make HTTP callback notifications on action start/end/failure events and on workflow end/failure events.

In the case of workflow job failure, the workflow job can be resubmitted, skipping previously completed actions. Before doing a resubmission the workflow application can be updated with a patch to fix a problem in the workflow application code.

2 Workflow Definition

A workflow definition is a DAG with control flow nodes (start, end, decision, fork, join, kill) and action nodes (map-reduce, pig, etc.); nodes are connected by transition arrows.

The workflow definition language is XML based and is called hPDL (Hadoop Process Definition Language).

Refer to Appendix A for the Oozie Workflow Definition XML Schema. Appendix B has Workflow Definition Examples.

2.1 Cycles in Workflow Definitions

Oozie does not support cycles in workflow definitions; workflow definitions must be a strict DAG.

At workflow application deployment time, if Oozie detects a cycle in the workflow definition it must fail the deployment.

3 Workflow Nodes

Workflow nodes are classified into control flow nodes and action nodes:

Control flow nodes: nodes that control the start and end of the workflow and the workflow job execution path.
Action nodes: nodes that trigger the execution of a computation/processing task.

Node names and transitions must conform to the pattern [a-zA-Z][\-_a-zA-Z0-9]*, of up to 20 characters long.

3.1 Control Flow Nodes

Control flow nodes define the beginning and the end of a workflow (the start, end and kill nodes) and provide a mechanism to control the workflow execution path (the decision, fork and join nodes).

3.1.1 Start Control Node

The start node is the entry point for a workflow job; it indicates the first workflow node the workflow job must transition to.

When a workflow is started, it automatically transitions to the node specified in the start.

A workflow definition must have one start node.

Syntax:

... ...
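As a rough sketch (the workflow name and the target node name are placeholders), a start node is declared inside the workflow-app element:

    <workflow-app name="example-wf" xmlns="uri:oozie:workflow:0.2">
        <start to="first-action"/>
        ...
    </workflow-app>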

The to attribute is the name of the first workflow node to execute.

Example:

... ...

3.1.2 End Control Node

The end node is the end for a workflow job; it indicates that the workflow job has completed successfully.

When a workflow job reaches the end node it finishes successfully (SUCCEEDED).

If one or more actions started by the workflow job are executing when the end node is reached, the actions will be killed. In this scenario the workflow job is still considered as successfully run.

A workflow definition must have one end node.

Syntax:

... ...
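A minimal sketch of an end node (the name 'end' is conventional but arbitrary):

    <workflow-app name="example-wf" xmlns="uri:oozie:workflow:0.2">
        ...
        <end name="end"/>
    </workflow-app>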

The name attribute is the name of the transition to do to end the workflow job.

Example:

...

3.1.3 Kill Control Node

The kill node allows a workflow job to kill itself.

When a workflow job reaches the kill node it finishes in error (KILLED).

If one or more actions started by the workflow job are executing when the kill node is reached, the actions will be killed.

A workflow definition may have zero or more kill nodes.

Syntax:

... [MESSAGE-TO-LOG] ...
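A sketch of a kill node; the node name is a placeholder and the message text is taken from the example below:

    <workflow-app name="example-wf" xmlns="uri:oozie:workflow:0.2">
        ...
        <kill name="killBecauseNoInput">
            <message>Input unavailable</message>
        </kill>
        ...
    </workflow-app>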

The name attribute in the kill node is the name of the kill action node.

The content of the message element will be logged as the kill reason for the workflow job.

A kill node does not have transition elements because it ends the workflow job, as KILLED.

Example:

... Input unavailable ...

3.1.4 Decision Control Node

A decision node enables a workflow to make a selection on the execution path to follow.

The behavior of a decision node can be seen as a switch-case statement.

A decision node consists of a list of predicate-transition pairs plus a default transition. Predicates are evaluated in order of appearance until one of them evaluates to true and the corresponding transition is taken. If none of the predicates evaluates to true, the default transition is taken.

Predicates are JSP Expression Language (EL) expressions (refer to section 4.2 of this document) that resolve into a boolean value, true or false. For example: ${fs:fileSize('/usr/foo/myinputdir') gt 10 * GB}

Syntax:

... [PREDICATE] ... [PREDICATE] ...
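A sketch of a decision node, reusing the predicates from the example below; the decision and transition node names are placeholders:

    <decision name="mydecision">
        <switch>
            <case to="reconsolidatejob">
                ${fs:fileSize(secondjobOutputDir) gt 10 * GB}
            </case>
            <case to="rexpandjob">
                ${fs:fileSize(secondjobOutputDir) lt 100 * MB}
            </case>
            <default to="end"/>
        </switch>
    </decision>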

The name attribute in the decision node is the name of the decision node.

Each case element contains a predicate and a transition name. The predicate ELs are evaluated in order until one returns true and the corresponding transition is taken.

The default element indicates the transition to take if none of the predicates evaluates to true.

All decision nodes must have a default element to avoid bringing the workflow into an error state if none of the predicates evaluates to true.

Example:

... ${fs:fileSize(secondjobOutputDir) gt 10 * GB} ${fs:fileSize(secondjobOutputDir) lt 100 * MB} ${ hadoop:counters('secondjob')[RECORDS][REDUCE_OUT] lt 1000000 } ...

3.1.5 Fork and Join Control Nodes

A fork node splits one path of execution into multiple concurrent paths of execution.

A join node waits until every concurrent execution path of a previous fork node arrives to it.

The fork and join nodes must be used in pairs. The join node assumes concurrent execution paths are children of the same fork node.

Syntax:

... ... ... ...
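A sketch of paired fork and join nodes; all node names are placeholders:

    <fork name="forking">
        <path start="firstparalleljob"/>
        <path start="secondparalleljob"/>
    </fork>
    <action name="firstparalleljob">
        ...
        <ok to="joining"/>
        <error to="kill"/>
    </action>
    <action name="secondparalleljob">
        ...
        <ok to="joining"/>
        <error to="kill"/>
    </action>
    <join name="joining" to="nextaction"/>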

The name attribute in the fork node is the name of the workflow fork node. The start attribute in the path elements of the fork node indicates the name of the workflow node that will be part of the concurrent execution paths.

The name attribute in the join node is the name of the workflow join node. The to attribute in the join node indicates the name of the workflow node that will be executed after all concurrent execution paths of the corresponding fork arrive to the join node.

Example:

... foo:8021 bar:8020 job1.xml foo:8021 bar:8020 job2.xml ...

By default, Oozie performs some validation that any forking in a workflow is valid and won't lead to any incorrect behavior or instability. However, if Oozie is preventing a workflow from being submitted and you are very certain that it should work, you can disable forkjoin validation so that Oozie will accept the workflow. To disable this validation just for a specific workflow, simply set oozie.wf.validate.ForkJoin to false in the job.properties file. To disable this validation for all workflows, simply set oozie.validate.ForkJoin to false in the oozie-site.xml file. Disabling this validation is determined by the AND of both of these properties, so it will be disabled if either or both are set to false and only enabled if both are set to true (or not specified).

3.2 Workflow Action Nodes

Action nodes are the mechanism by which a workflow triggers the execution of a computation/processing task.

3.2.1 Action Basis

The following sub-sections define common behavior and capabilities for all action types.

3.2.1.1 Action Computation/Processing Is Always Remote

All computation/processing tasks triggered by an action node are remote to Oozie. No workflow application specific computation/processing task is executed within Oozie.

3.2.1.2 Actions Are Asynchronous

All computation/processing tasks triggered by an action node are executed asynchronously by Oozie. For most types of computation/processing tasks triggered by workflow actions, the workflow job has to wait until the computation/processing task completes before transitioning to the following node in the workflow.

The exception is the fs action, which is handled as a synchronous action.

Oozie can detect completion of computation/processing tasks by two different means, callbacks and polling.

When a computation/processing task is started by Oozie, Oozie provides a unique callback URL to the task; the task should invoke the given URL to notify its completion.

For cases where the task failed to invoke the callback URL for any reason (i.e. a transient network failure) or where the type of task cannot invoke the callback URL upon completion, Oozie has a mechanism to poll computation/processing tasks for completion.

3.2.1.3 Actions Have 2 Transitions, ok and error

If a computation/processing task -triggered by a workflow- completes successfully, it transitions to ok.

If a computation/processing task -triggered by a workflow- fails to complete successfully, it transitions to error.

If a computation/processing task exits in error, the computation/processing task must provide error-code and error-message information to Oozie. This information can be used from decision nodes to implement fine grained error handling at the workflow application level.

Each action type must clearly define all the error codes it can produce.

3.2.1.4 Action Recovery

Oozie provides recovery capabilities when starting or ending actions.

Once an action starts successfully, Oozie will not retry starting the action if the action fails during its execution. The assumption is that the external system (i.e. Hadoop) executing the action has enough resilience to recover jobs once they have started (i.e. Hadoop task retries).

Depending on the nature of the failure, Oozie will have different recovery strategies.

If the failure is of a transient nature, Oozie will perform retries after a pre-defined time interval. The number of retries and the time interval for a type of action must be pre-configured at the Oozie level.
Workflow jobs can override such configuration.Examples of a transient failures are network problems or a remote system temporary unavailable.If the failure is of non-transient nature, Oozie will suspend the workflow job until an manual or programmatic intervention resumes the workflow job and the action start or end is retried. It is the responsibility of an administrator or an external managing system to perform any necessary cleanup before resuming the workflow job.If the failure is an error and a retry will not resolve the problem, Oozie will perform the error transition for the action.3.2.2 Map-Reduce ActionThemap-reduceaction starts a Hadoop map/reduce job from a workflow. Hadoop jobs can be Java Map/Reduce jobs or streaming jobs.Amap-reduceaction can be configured to perform file system cleanup and directory creation before starting the map reduce job. This capability enables Oozie to retry a Hadoop job in the situation of a transient failure (Hadoop checks the non-existence of the job output directory and then creates it when the Hadoop job is starting, thus a retry without cleanup of the job output directory would fail).The workflow job will wait until the Hadoop map/reduce job completes before continuing to the next action in the workflow execution path.The counters of the Hadoop job and job exit status (=FAILED=,KILLEDorSUCCEEDED) must be available to the workflow job after the Hadoop jobs ends. This information can be used from within decision nodes and other actions configurations.Themap-reduceaction has to be configured with all the necessary Hadoop JobConf properties to run the Hadoop map/reduce job.Hadoop JobConf properties can be specified in a JobConf XML file bundled with the workflow application or they can be indicated inline in themap-reduceaction configuration.The configuration properties are loaded in the following order,streaming,job-xmlandconfiguration, and later values override earlier values.Streaming and inline property values can be parameterized (templatized) using EL expressions.The Hadoopmapred.job.trackerandfs.default.nameproperties must not be present in the job-xml and inline configuration.3.2.2.1 Adding Files and Archives for the JobThefile,archiveelements make available, to map-reduce jobs, files and archives. If the specified path is relative, it is assumed the file or archiver are within the application directory, in the corresponding sub-path. If the path is absolute, the file or archive it is expected in the given absolute path.Files specified with thefileelement, will be symbolic links in the home directory of the task.If a file is a native library (an '.so' or a '.so.#' file), it will be symlinked as and '.so' file in the task running directory, thus available to the task JVM.To force a symlink for a file on the task running directory, use a '#' followed by the symlink name. For example 'mycat.sh#cat'.Refer to Hadoop distributed cache documentation for details more details on files and archives.3.2.2.2 StreamingStreaming information can be specified in thestreamingelement.Themapperandreducerelements are used to specify the executable/script to be used as mapper and reducer.User defined scripts must be bundled with the workflow application and they must be declared in thefileselement of the streaming configuration. 
If the are not declared in thefileselement of the configuration it is assumed they will be available (and in the command PATH) of the Hadoop slave machines.Some streaming jobs require Files found on HDFS to be available to the mapper/reducer scripts. This is done using thefileandarchiveelements described in the previous section.The Mapper/Reducer can be overridden by amapred.mapper.classormapred.reducer.classproperties in thejob-xmlfile orconfigurationelements.3.2.2.3 PipesPipes information can be specified in thepipeselement.A subset of the command line options which can be used while using the Hadoop Pipes Submitter can be specified via elements -map,reduce,inputformat,partitioner,writer,program.Theprogramelement is used to specify the executable/script to be used.User defined program must be bundled with the workflow application.Some pipe jobs require Files found on HDFS to be available to the mapper/reducer scripts. This is done using thefileandarchiveelements described in the previous section.Pipe properties can be overridden by specifying them in thejob-xmlfile orconfigurationelement.3.2.2.4 Syntax

... [JOB-TRACKER] [NAME-NODE] ... ... [MAPPER-PROCESS] [REDUCER-PROCESS] [RECORD-READER-CLASS] [NAME=VALUE] ... [NAME=VALUE] ... [MAPPER] [REDUCER] [INPUTFORMAT] [PARTITIONER] [OUTPUTFORMAT] [EXECUTABLE] [JOB-XML-FILE] [PROPERTY-NAME] [PROPERTY-VALUE] ... [FILE-PATH] ... [FILE-PATH] ... ...
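A trimmed sketch of a map-reduce action node, reusing the hosts, job-xml file and input/output paths from the example below; the action and transition names are placeholders:

    <action name="myfirstHadoopJob">
        <map-reduce>
            <job-tracker>foo:8021</job-tracker>
            <name-node>bar:8020</name-node>
            <prepare>
                <delete path="hdfs://foo:8020/usr/tucu/output-data"/>
            </prepare>
            <job-xml>/myfirstjob.xml</job-xml>
            <configuration>
                <property>
                    <name>mapred.input.dir</name>
                    <value>/usr/tucu/input-data</value>
                </property>
                <property>
                    <name>mapred.output.dir</name>
                    <value>/usr/tucu/output-data</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="myNextAction"/>
        <error to="errorCleanup"/>
    </action>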

The prepare element, if present, indicates a list of paths to delete before starting the job. This should be used exclusively for directory cleanup for the job to be executed. The delete operation will be performed in the fs.default.name filesystem.

The job-xml element, if present, must refer to a Hadoop JobConf job.xml file bundled in the workflow application. The job-xml element is optional and, as of schema 0.4, multiple job-xml elements are allowed in order to specify multiple Hadoop JobConf job.xml files.

The configuration element, if present, contains JobConf properties for the Hadoop job.

Properties specified in the configuration element override properties specified in the file specified in the job-xml element.

External stats can be turned on/off by specifying the property oozie.action.external.stats.write as true or false in the configuration element of workflow.xml. The default value for this property is false.

The file element, if present, must specify the target symbolic link for binaries by separating the original file and target with a # (file#target-sym-link). This is not required for libraries.

The mapper and reducer process for streaming jobs should specify the executable command with URL encoding, e.g. '%' should be replaced by '%25'.

Example:

... foo:8021 bar:8020 /myfirstjob.xml mapred.input.dir /usr/tucu/input-data mapred.output.dir /usr/tucu/input-data mapred.reduce.tasks ${firstJobReducers} oozie.action.external.stats.write true ...

In the above example, the number of reducers to be used by the Map/Reduce job has to be specified as a parameter of the workflow job configuration when creating the workflow job.

Streaming Example:

... foo:8021 bar:8020 /bin/bash testarchive/bin/mapper.sh testfile /bin/bash testarchive/bin/reducer.sh mapred.input.dir ${input} mapred.output.dir ${output} stream.num.map.output.key.fields 3 /users/blabla/testfile.sh#testfile /users/blabla/testarchive.jar#testarchive ...

Pipes Example:

... foo:8021 bar:8020 bin/wordcount-simple#wordcount-simple mapred.input.dir ${input} mapred.output.dir ${output} /users/blabla/testarchive.jar#testarchive ...

3.2.3 Pig ActionThepigaction starts a Pig job.The workflow job will wait until the pig job completes before continuing to the next action.Thepigaction has to be configured with the job-tracker, name-node, pig script and the necessary parameters and configuration to run the Pig job.Apigaction can be configured to perform HDFS files/directories cleanup before starting the Pig job. This capability enables Oozie to retry a Pig job in the situation of a transient failure (Pig creates temporary directories for intermediate data, thus a retry without cleanup would fail).Hadoop JobConf properties can be specified in a JobConf XML file bundled with the workflow application or they can be indicated inline in thepigaction configuration.The configuration properties are loaded in the following order,job-xmlandconfiguration, and later values override earlier values.Inline property values can be parameterized (templatized) using EL expressions.The Hadoopmapred.job.trackerandfs.default.nameproperties must not be present in the job-xml and inline configuration.As with Hadoop map-reduce jobs, it is possible to add files and archives to be available to the Pig job, refer to section [#FilesAchives][Adding Files and Archives for the Job].Syntax for Pig actions in Oozie schema 0.2:

... [JOB-TRACKER] [NAME-NODE] ... ... [JOB-XML-FILE] [PROPERTY-NAME] [PROPERTY-VALUE] ... [PIG-SCRIPT] [PARAM-VALUE] ... [PARAM-VALUE] [ARGUMENT-VALUE] ... [ARGUMENT-VALUE] [FILE-PATH] ... [FILE-PATH] ... ...
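A trimmed sketch of a pig action in schema 0.2, reusing the script path and parameters from the example below; the action and transition names are placeholders:

    <action name="myfirstpigjob">
        <pig>
            <job-tracker>foo:8021</job-tracker>
            <name-node>bar:8020</name-node>
            <script>/mypigscript.pig</script>
            <argument>-param</argument>
            <argument>INPUT=${inputDir}</argument>
            <argument>-param</argument>
            <argument>OUTPUT=${outputDir}/pig-output3</argument>
        </pig>
        <ok to="myotherjob"/>
        <error to="errorcleanup"/>
    </action>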

Syntax for Pig actions in Oozie schema 0.1:

... [JOB-TRACKER] [NAME-NODE] ... ... [JOB-XML-FILE] [PROPERTY-NAME] [PROPERTY-VALUE] ... [PIG-SCRIPT] [PARAM-VALUE] ... [PARAM-VALUE] [FILE-PATH] ... [FILE-PATH] ... ...

Theprepareelement, if present, indicates a list of path do delete before starting the job. This should be used exclusively for directory cleanup for the job to be executed.Thejob-xmlelement, if present, must refer to a Hadoop JobConfjob.xmlfile bundled in the workflow application. Thejob-xmlelement is optional and as of schema 0.4, multiplejob-xmlelements are allowed in order to specify multiple Hadoop JobConfjob.xmlfiles.Theconfigurationelement, if present, contains JobConf properties for the underlying Hadoop jobs.Properties specified in theconfigurationelement override properties specified in the file specified in thejob-xmlelement.External Stats can be turned on/off by specifying the propertyoozie.action.external.stats.writeastrueorfalsein the configuration element of workflow.xml. The default value for this property isfalse.The inline and job-xml configuration properties are passed to the Hadoop jobs submitted by Pig runtime.Thescriptelement contains the pig script to execute. The pig script can be templatized with variables of the form${VARIABLE}. The values of these variables can then be specified using theparamselement.NOTE: Oozie will perform the parameter substitution before firing the pig job. This is different from theparameter substitution mechanism provided by Pig, which has a few limitations.Theparamselement, if present, contains parameters to be passed to the pig script.In Oozie schema 0.2:Theargumentselement, if present, contains arguments to be passed to the pig script.All the above elements can be parameterized (templatized) using EL expressions.Example for Oozie schema 0.2:

... foo:8021 bar:8020 mapred.compress.map.output true oozie.action.external.stats.write true /mypigscript.pig -param INPUT=${inputDir} -param OUTPUT=${outputDir}/pig-output3 ...

Example for Oozie schema 0.1:

... foo:8021 bar:8020 mapred.compress.map.output true /mypigscript.pig InputDir=/home/tucu/input-data OutputDir=${jobOutput} ...

3.2.4 Fs (HDFS) Action

The fs action allows manipulation of files and directories in HDFS from a workflow application. The supported commands are move, delete, mkdir, chmod, touchz and chgrp.

The FS commands are executed synchronously from within the fs action; the workflow job will wait until the specified file commands are completed before continuing to the next action.

Path names specified in the fs action can be parameterized (templatized) using EL expressions.

Each file path must specify the file system URI; for move operations, the target must not specify the system URI.

IMPORTANT: The commands within an fs action do not happen atomically; if an fs action fails half way through the commands being executed, successfully executed commands are not rolled back. The fs action, before executing any command, must check that source paths exist and target paths don't exist (the constraint regarding targets is relaxed for the move action, see below for details), thus failing before executing any command. Therefore the validity of all paths specified in one fs action is evaluated before any of the file operations are executed, so there is less chance of an error occurring while the fs action executes.

Syntax:

... ... ... ... ... ... ...
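A sketch of an fs action combining several of the commands described below; host names and paths are placeholders:

    <action name="hdfscommands">
        <fs>
            <delete path="hdfs://foo:8020/usr/tucu/temp-data"/>
            <mkdir path="hdfs://foo:8020/usr/tucu/archives/${wf:id()}"/>
            <move source="hdfs://foo:8020/usr/tucu/${jobInput}" target="archives/${wf:id()}/processed-input"/>
            <chmod path="hdfs://foo:8020/usr/tucu/archives/${wf:id()}" permissions="-rwxrw-rw-" dir-files="true"/>
        </fs>
        <ok to="myotherjob"/>
        <error to="errorcleanup"/>
    </action>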

Thedeletecommand deletes the specified path, if it is a directory it deletes recursively all its content and then deletes the directory.Themkdircommand creates the specified directory, it creates all missing directories in the path. If the directory already exist it does a no-op.In themovecommand thesourcepath must exist. The following scenarios are addressed for amove: The file system URI(e.g.hdfs://{nameNode})can be skipped in thetargetpath. It is understood to be the same as that of the source. But if the target path does contain the system URI, it cannot be different than that of the source. The parent directory of thetargetpath must exist For thetargetpath, if it is a file, then it must not already exist. However, if thetargetpath is an already existing directory, themoveaction will place yoursourceas a child of thetargetdirectory.Thechmodcommand changes the permissions for the specified path. Permissions can be specified using the Unix Symbolic representation (e.g. -rwxrw-rw-) or an octal representation (755). When doing achmodcommand on a directory, by default the command is applied to the directory and the files one level within the directory. To apply thechmodcommand to the directory, without affecting the files within it, thedir-filesattribute must be set tofalse. To apply thechmodcommand recursively to all levels within a directory, put arecursiveelement inside theelement.Thetouchzcommand creates a zero length file in the specified path if none exists. If one already exists, then touchz will perform a touch operation. Touchz works only for absolute paths.Thechgrpcommand changes the group for the specified path. When doing achgrpcommand on a directory, by default the command is applied to the directory and the files one level within the directory. To apply thechgrpcommand to the directory, without affecting the files within it, thedir-filesattribute must be set tofalse. To apply thechgrpcommand recursively to all levels within a directory, put arecursiveelement inside theelement.Example:

... ...

In the above example, a directory named after the workflow job ID is created and the input of the job, passed as workflow configuration parameter, is archived under the previously created directory.As of schema 0.4, if aname-nodeelement is specified, then it is not necessary for any of the paths to start with the file system URI as it is taken from thename-nodeelement. This is also true if the name-node is specified in the global section (seeGlobal Configurations)As of schema 0.4, zero or morejob-xmlelements can be specified; these must refer to Hadoop JobConfjob.xmlformatted files bundled in the workflow application. They can be used to set additional properties for the FileSystem instance.As of schema 0.4, if aconfigurationelement is specified, then it will also be used to set additional JobConf properties for the FileSystem instance. Properties specified in theconfigurationelement override properties specified in the files specified by anyjob-xmlelements.Example:

... hdfs://foo:8020 fs-info.xml some.property some.value ...

3.2.5 Ssh Action

NOTE: SSH actions are deprecated in Oozie schema 0.1 and removed in Oozie schema 0.2.

The ssh action starts a shell command on a remote machine as a remote secure shell in background. The workflow job will wait until the remote shell command completes before continuing to the next action.

The shell command must be present in the remote machine and it must be available for execution via the command path.

The shell command is executed in the home directory of the specified user on the remote host.

The output (STDOUT) of the ssh job can be made available to the workflow job after the ssh job ends. This information could be used from within decision nodes. If the output of the ssh job is made available to the workflow job, the shell command must follow these requirements: the format of the output must be a valid Java Properties file, and the size of the output must not exceed 2KB.

Syntax:

... [USER]@[HOST] [SHELL] [ARGUMENTS] ... ...
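A sketch of an ssh action, reusing the command and arguments from the example below; the user@host value is a placeholder:

    <action name="myssjob">
        <ssh>
            <host>user@remotehost.example.com</host>
            <command>uploaddata</command>
            <args>jdbc:derby://bar.com:1527/myDB</args>
            <args>hdfs://foobar.com:8020/usr/tucu/myData</args>
        </ssh>
        <ok to="myotherjob"/>
        <error to="errorcleanup"/>
    </action>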

The host element indicates the user and host where the shell will be executed.

The command element indicates the shell command to execute.

The args element, if present, contains parameters to be passed to the shell command. If more than one args element is present they are concatenated in order.

If the capture-output element is present, it indicates to Oozie to capture the STDOUT of the ssh command execution. The ssh command output must be in Java Properties file format and it must not exceed 2KB. From within the workflow definition, the output of an ssh action node is accessible via the String action:output(String node, String key) function (refer to section '4.2.6 Action EL Functions').

The configuration of the ssh action can be parameterized (templatized) using EL expressions.

Example:

... [email protected] uploaddata jdbc:derby://bar.com:1527/myDB hdfs://foobar.com:8020/usr/tucu/myData ...

In the above example, the uploaddata shell command is executed with two arguments, jdbc:derby://foo.com:1527/myDB and hdfs://foobar.com:8020/usr/tucu/myData.

The uploaddata shell must be available in the remote host and available in the command path.

The output of the command will be ignored because the capture-output element is not present.

3.2.6 Sub-workflow Action

The sub-workflow action runs a child workflow job; the child workflow job can be in the same Oozie system or in another Oozie system.

The parent workflow job will wait until the child workflow job has completed.

Syntax:

... [WF-APPLICATION-PATH] [PROPERTY-NAME] [PROPERTY-VALUE] ... ...
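A sketch of a sub-workflow action, reusing the child-wf application and input.dir property from the example below; the action and transition names are placeholders:

    <action name="runchildwf">
        <sub-workflow>
            <app-path>child-wf</app-path>
            <propagate-configuration/>
            <configuration>
                <property>
                    <name>input.dir</name>
                    <value>${wf:id()}/second-mr-output</value>
                </property>
            </configuration>
        </sub-workflow>
        <ok to="end"/>
        <error to="kill"/>
    </action>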

The child workflow job runs in the same Oozie system instance where the parent workflow job is running.

The app-path element specifies the path to the workflow application of the child workflow job.

The propagate-configuration flag, if present, indicates that the workflow job configuration should be propagated to the child workflow.

The configuration section can be used to specify the job properties that are required to run the child workflow job.

The configuration of the sub-workflow action can be parameterized (templatized) using EL expressions.

Example:

... child-wf input.dir ${wf:id()}/second-mr-output ...

In the above example, the workflow definition with the namechild-wfwill be run on the Oozie instance at.http://myhost:11000/oozie. The specified workflow application must be already deployed on the target Oozie instance.A configuration parameterinput.diris being passed as job property to the child workflow job.The subworkflow can inherit the lib jars from the parent workflow by settingoozie.subworkflow.classpath.inheritanceto true in oozie-site.xml or on a per-job basis by settingoozie.wf.subworkflow.classpath.inheritanceto true in a job.properties file. If both are specified,oozie.wf.subworkflow.classpath.inheritancehas priority. If the subworkflow and the parent have conflicting jars, the subworkflow's jar has priority. By default,oozie.wf.subworkflow.classpath.inheritanceis set to true.3.2.7 Java ActionThejavaaction will execute thepublic static void main(String[] args)method of the specified main Java class.Java applications are executed in the Hadoop cluster as map-reduce job with a single Mapper task.The workflow job will wait until the java application completes its execution before continuing to the next action.Thejavaaction has to be configured with the job-tracker, name-node, main Java class, JVM options and arguments.To indicate anokaction transition, the main Java class must complete gracefully themainmethod invocation.To indicate anerroraction transition, the main Java class must throw an exception.The main Java class must not callSystem.exit(int n)as this will make thejavaaction to do anerrortransition regardless of the used exit code.Ajavaaction can be configured to perform HDFS files/directories cleanup before starting the Java application. This capability enables Oozie to retry a Java application in the situation of a transient or non-transient failure (This can be used to cleanup any temporary data which may have been created by the Java application in case of failure).Ajavaaction can create a Hadoop configuration. The Hadoop configuration is made available as a local file to the Java application in its running directory, the file name isoozie-action.conf.xml. Similar tomap-reduceandpigactions it is possible to refer ajob.xmlfile and using inline configuration properties. For repeated configuration properties later values override earlier ones.Inline property values can be parameterized (templatized) using EL expressions.The Hadoopmapred.job.tracker(=job-tracker=) andfs.default.name(=name-node=) properties must not be present in thejob-xmland in the inline configuration.As withmap-reduceandpigactions, it is possible to add files and archives to be available to the Java application. Refer to section [#FilesAchives][Adding Files and Archives for the Job].Thecapture-outputelement can be used to propagate values back into Oozie context, which can then be accessed via EL-functions. This needs to be written out as a java properties format file. The filename is obtained via a System property specified by the constantJavaMainMapper.OOZIE_JAVA_MAIN_CAPTURE_OUTPUT_FILEIMPORTANT:In order for a Java action to succeed on a secure cluster, it must propagate the Hadoop delegation token like in the following code snippet (this is benign on non-secure clusters):// propagate delegation related props from launcher job to MR jobif (System.getenv("HADOOP_TOKEN_FILE_LOCATION") != null) { jobConf.set("mapreduce.job.credentials.binary", System.getenv("HADOOP_TOKEN_FILE_LOCATION"));}IMPORTANT:Because the Java application is run from within a Map-Reduce job, from Hadoop 0.20. 
onwards a queue must be assigned to it. The queue name must be specified as a configuration property.Syntax:

... [JOB-TRACKER] [NAME-NODE] ... ... [JOB-XML] [PROPERTY-NAME] [PROPERTY-VALUE] ... [MAIN-CLASS][JAVA-STARTUP-OPTS]ARGUMENT ... [FILE-PATH] ... [FILE-PATH] ... ...
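A sketch of a java action, reusing the queue, class name, JVM option and arguments from the example below; the action and transition names are placeholders:

    <action name="myfirstjavajob">
        <java>
            <job-tracker>foo:8021</job-tracker>
            <name-node>bar:8020</name-node>
            <configuration>
                <property>
                    <name>mapred.queue.name</name>
                    <value>default</value>
                </property>
            </configuration>
            <main-class>org.apache.oozie.MyFirstMainClass</main-class>
            <java-opts>-Dblah</java-opts>
            <arg>argument1</arg>
            <arg>argument2</arg>
        </java>
        <ok to="myotherjob"/>
        <error to="errorcleanup"/>
    </action>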

The prepare element, if present, indicates a list of paths to delete before starting the Java application. This should be used exclusively for directory cleanup for the Java application to be executed.

The java-opts element, if present, contains the command line parameters which are to be used to start the JVM that will execute the Java application. Using this element is equivalent to using the mapred.child.java.opts configuration property.

The arg elements, if present, contain arguments for the main function. The value of each arg element is considered a single argument and they are passed to the main method in the same order.

All the above elements can be parameterized (templatized) using EL expressions.

Example:

... foo:8021 bar:8020 mapred.queue.name default org.apache.oozie.MyFirstMainClass -Dblahargument1argument2 ...

3.2.7.1 Overriding an Action's Main Class

This feature is useful for developers to change the Main classes without having to recompile or redeploy Oozie.

For most actions (not just the Java action), you can override the Main class it uses by specifying the following configuration property and making sure that your class is included in the workflow's classpath. If you specify this in the Java action, the main-class element has priority.

oozie.launcher.action.main.class org.my.CustomMain
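Written out as a configuration property inside the action, the override from the fragment above would look roughly like:

    <configuration>
        <property>
            <name>oozie.launcher.action.main.class</name>
            <value>org.my.CustomMain</value>
        </property>
    </configuration>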

Note:Most actions typically pass information to their corresponding Main in specific ways; you should look at the action's existing Main to see how it works before creating your own. In fact, its probably simplest to just subclass the existing Main and add/modify/overwrite any behavior you want to change.4 Parameterization of WorkflowsWorkflow definitions can be parameterized.When workflow node is executed by Oozie all the ELs are resolved into concrete values.The parameterization of workflow definitions it done using JSP Expression Language syntax from theJSP 2.0 Specification (JSP.2.3), allowing not only to support variables as parameters but also functions and complex expressions.EL expressions can be used in the configuration values of action and decision nodes. They can be used in XML attribute values and in XML element and attribute values.They cannot be used in XML element and attribute names. They cannot be used in the name of a node and they cannot be used within thetransitionelements of a node.4.1 Workflow Job Properties (or Parameters)When a workflow job is submitted to Oozie, the submitter may specify as many workflow job properties as required (similar to Hadoop JobConf properties).Workflow applications may define default values for the workflow job parameters. They must be defined in aconfig-default.xmlfile bundled with the workflow application archive (refer to section '7 Workflow Applications Packaging'). Workflow job properties have precedence over the default values.Properties that are a valid Java identifier,[A-Za-z_][0-9A-Za-z_]*, are available as '${NAME}' variables within the workflow definition.Properties that are not valid Java Identifier, for example 'job.tracker', are available via theString wf:conf(String name)function. Valid identifier properties are available via this function as well.Using properties that are valid Java identifiers result in a more readable and compact definition.By using properties *Example:*Parameterized Workflow definition:

... ${jobTracker} ${nameNode} mapred.mapper.class com.foo.FirstMapper mapred.reducer.class com.foo.FirstReducer mapred.input.dir ${inputDir} mapred.output.dir ${outputDir} ...

When submitting a workflow job for the workflow definition above, 3 workflow job properties must be specified:

jobTracker:
inputDir:
outputDir:

If the above 3 properties are not specified, the job will fail.

As of schema 0.4, a list of formal parameters can be provided which will allow Oozie to verify, at submission time, that said properties are actually specified (i.e. before the job is executed and fails). Default values can also be provided.

Example: The previous parameterized workflow definition with formal parameters:

inputDir outputDir out-dir ... ${jobTracker} ${nameNode} mapred.mapper.class com.foo.FirstMapper mapred.reducer.class com.foo.FirstReducer mapred.input.dir ${inputDir} mapred.output.dir ${outputDir} ...
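A sketch of the parameters block this example describes (schema 0.4), with inputDir required and outputDir defaulting to out-dir; the workflow name is a placeholder:

    <workflow-app name="example-wf" xmlns="uri:oozie:workflow:0.4">
        <parameters>
            <property>
                <name>inputDir</name>
            </property>
            <property>
                <name>outputDir</name>
                <value>out-dir</value>
            </property>
        </parameters>
        ...
    </workflow-app>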

In the above example, ifinputDiris not specified, Oozie will print an error message instead of submitting the job. If =outputDir= is not specified, Oozie will use the default value,out-dir.4.2 Expression Language FunctionsOozie, besides allowing the use of workflow job properties to parameterize workflow jobs, it provides a set of build in EL functions that enable a more complex parameterization of workflow action nodes as well as the predicates in decision nodes.4.2.1 Basic EL Constants KB:1024, one kilobyte. MB:1024 * KB, one megabyte. GB:1024 * MB, one gigabyte. TB:1024 * GB, one terabyte. PB:1024 * TG, one petabyte.All the above constants are of typelong.4.2.2 Basic EL FunctionsString firstNotNull(String value1, String value2)It returns the first notnullvalue, ornullif both arenull.Note that if the output of this function isnulland it is used as string, the EL library converts it to an empty string. This is the common behavior when usingfirstNotNull()in node configuration sections.String concat(String s1, String s2)It returns the concatenation of 2 strings. A string withnullvalue is considered as an empty string.String replaceAll(String src, String regex, String replacement)Replace each occurrence of regular expression match in the first string with thereplacementstring and return the replaced string. A 'regex' string withnullvalue is considered as no change. A 'replacement' string withnullvalue is consider as an empty string.String appendAll(String src, String append, String delimeter)Add theappendstring into each splitted sub-strings of the first string(=src=). The split is performed intosrcstring using thedelimiter. E.g.appendAll("/a/b/,/c/b/,/c/d/", "ADD", ",")will return/a/b/ADD,/c/b/ADD,/c/d/ADD. Aappendstring withnullvalue is consider as an empty string. Adelimiterstring with valuenullis considered as no append in the string.String trim(String s)It returns the trimmed value of the given string. A string withnullvalue is considered as an empty string.String urlEncode(String s)It returns the URL UTF-8 encoded value of the given string. A string withnullvalue is considered as an empty string.String timestamp()It returns the UTC current date and time in W3C format down to the second (YYYY-MM-DDThh:mm:ss.sZ). I.e.: 1997-07-16T19:20:30.45ZString toJsonStr(Map)(since Oozie 3.3)It returns an XML encoded JSON representation of a Map. This function is useful to encode as a single property the complete action-data of an action,wf:actionData(String actionName), in order to pass it in full to another action.String toPropertiesStr(Map)(since Oozie 3.3)It returns an XML encoded Properties representation of a Map. This function is useful to encode as a single property the complete action-data of an action,wf:actionData(String actionName), in order to pass it in full to another action.String toConfigurationStr(Map)(since Oozie 3.3)It returns an XML encoded Configuration representation of a Map. 
This function is useful to encode as a single property the complete action-data of an action,wf:actionData(String actionName), in order to pass it in full to another action.4.2.3 Workflow EL FunctionsString wf:id()It returns the workflow job ID for the current workflow job.String wf:name()It returns the workflow application name for the current workflow job.String wf:appPath()It returns the workflow application path for the current workflow job.String wf:conf(String name)It returns the value of the workflow job configuration property for the current workflow job, or an empty string if undefined.String wf:user()It returns the user name that started the current workflow job.String wf:group()It returns the group/ACL for the current workflow job.String wf:callback(String stateVar)It returns the callback URL for the current workflow action node,stateVarcan be a valid exit state (=OK= orERROR) for the action or a token to be replaced with the exit state by the remote system executing the task.String wf:transition(String node)It returns the transition taken by the specified workflow action node, or an empty string if the action has not being executed or it has not completed yet.String wf:lastErrorNode()It returns the name of the last workflow action node that exit with anERRORexit state, or an empty string if no a ction has exited withERRORstate in the current workflow job.String wf:errorCode(String node)It returns the error code for the specified action node, or an empty string if the action node has not exited withERRORstate.Each type of action node must define its complete error code list.String wf:errorMessage(String message)It returns the error message for the specified action node, or an empty string if no action node has not exited withERRORstate.The error message can be useful for debugging and notification purposes.int wf:run()It returns the run number for the current workflow job, normally0unless the workflow job is re-run, in which case indicates the current run.Mapwf:actionData(String node)This function is only applicable to action nodes that produce output data on completion.The output data is in a Java Properties format and via this EL function it is available as aMap.int wf:actionExternalId(String node)It returns the external Id for an action node, or an empty string if the action has not being executed or it has not completed yet.int wf:actionTrackerUri(String node)It returns the tracker URIfor an action node, or an empty string if the action has not being executed or it has not completed yet.int wf:actionExternalStatus(String node)It returns the external status for an action node, or an empty string if the action has not being executed or it has not completed yet.4.2.4 Hadoop EL Constants RECORDS:Hadoop record counters group name. MAP_IN:Hadoop mapper input records counter name. MAP_OUT:Hadoop mapper output records counter name. REDUCE_IN:Hadoop reducer input records counter name. REDUCE_OUT:Hadoop reducer input record counter name. GROUPS:1024 * Hadoop mapper/reducer record groups counter name.4.2.5 Hadoop EL Functions#HadoopCountersEL *Map < String, Map < String, Long > > hadoop:counters(String node)*It returns the counters for a job submitted by a Hadoop action node. It returns0if the if the Hadoop job has not started yet and for undefined counters.The outer Map key is a counter group name. 
The inner Map value is a Map of counter names and counter values.The Hadoop EL constants defined in the previous section provide access to the Hadoop built in record counters.This function can also be used to access specific action statistics information. Examples of action stats and their usage through EL Functions (referenced in workflow xml) are given below.Example of MR action stats:{ "ACTION_TYPE": "MAP_REDUCE", "org.apache.hadoop.mapred.JobInProgress$Counter": { "TOTAL_LAUNCHED_REDUCES": 1, "TOTAL_LAUNCHED_MAPS": 1, "DATA_LOCAL_MAPS": 1 }, "FileSystemCounters": { "FILE_BYTES_READ": 1746, "HDFS_BYTES_READ": 1409, "FILE_BYTES_WRITTEN": 3524, "HDFS_BYTES_WRITTEN": 1547 }, "org.apache.hadoop.mapred.Task$Counter": { "REDUCE_INPUT_GROUPS": 33, "COMBINE_OUTPUT_RECORDS": 0, "MAP_INPUT_RECORDS": 33, "REDUCE_SHUFFLE_BYTES": 0, "REDUCE_OUTPUT_RECORDS": 33, "SPILLED_RECORDS": 66, "MAP_OUTPUT_BYTES": 1674, "MAP_INPUT_BYTES": 1409, "MAP_OUTPUT_RECORDS": 33, "COMBINE_INPUT_RECORDS": 0, "REDUCE_INPUT_RECORDS": 33 }}Below is the workflow that describes how to access specific information using hadoop:counters() EL function from the MR stats. *Workflow xml:*

${jobTracker} ${nameNode} mapred.job.queue.name ${queueName} mapred.mapper.class org.apache.oozie.example.SampleMapper mapred.reducer.class org.apache.oozie.example.SampleReducer mapred.map.tasks 1 mapred.input.dir /user/${wf:user()}/${examplesRoot}/input-data/text mapred.output.dir /user/${wf:user()}/${examplesRoot}/output-data/${outputDir} oozie.action.external.stats.writetrue ${jobTracker} ${nameNode} mapred.job.queue.name ${queueName} MyTest ${hadoop:counters("mr-node")["FileSystemCounters"]["FILE_BYTES_READ"]} Map/Reduce failed, error message[${wf:errorMessage(wf:lastErrorNode())}]

Example of Pig action stats:{ "ACTION_TYPE": "PIG", "JOB_GRAPH": "job_201112191708_0008", "PIG_VERSION": "0.9.0", "FEATURES": "UNKNOWN", "ERROR_MESSAGE": null, "NUMBER_JOBS": "1", "RECORD_WRITTEN": "33", "BYTES_WRITTEN": "1410", "HADOOP_VERSION": "0.20.2", "SCRIPT_ID": "bbe016e9-f678-43c3-96fc-f24359957582", "PROACTIVE_SPILL_COUNT_RECORDS": "0", "PROACTIVE_SPILL_COUNT_OBJECTS": "0", "RETURN_CODE": "0", "ERROR_CODE": "-1", "SMM_SPILL_COUNT": "0", "DURATION": "36850", "job_201112191708_0008": { "MAP_INPUT_RECORDS": "33", "MIN_REDUCE_TIME": "0", "MULTI_STORE_COUNTERS": {}, "MAX_REDUCE_TIME": "0", "NUMBER_REDUCES": "0", "ERROR_MESSAGE": null, "RECORD_WRITTEN": "33", "HDFS_BYTES_WRITTEN": "1410", "JOB_ID": "job_201112191708_0008", "REDUCE_INPUT_RECORDS": "0", "AVG_REDUCE_TIME": "0", "MAX_MAP_TIME": "9169", "BYTES_WRITTEN": "1410", "Alias": "A,B", "REDUCE_OUTPUT_RECORDS": "0", "SMMS_SPILL_COUNT": "0", "PROACTIVE_SPILL_COUNT_RECS": "0", "PROACTIVE_SPILL_COUNT_OBJECTS": "0", "HADOOP_COUNTERS": null, "MIN_MAP_TIME": "9169", "MAP_OUTPUT_RECORDS": "33", "AVG_MAP_TIME": "9169", "FEATURE": "MAP_ONLY", "NUMBER_MAPS": "1" }}Below is the workflow that describes how to access specific information using hadoop:counters() EL function from the Pig stats. *Workflow xml:*

${jobTracker} ${nameNode} mapred.job.queue.name ${queueName} mapred.compress.map.output true oozie.action.external.stats.writetrue id.pig INPUT=/user/${wf:user()}/${examplesRoot}/input-data/text OUTPUT=/user/${wf:user()}/${examplesRoot}/output-data/pig ${jobTracker} ${nameNode} mapred.job.queue.name ${queueName} MyTest ${hadoop:counters("pig-node")["JOB_GRAPH"]} Pig failed, error message[${wf:errorMessage(wf:lastErrorNode())}]

4.2.6 Hadoop Jobs EL Function

The function wf:actionData() can be used to access Hadoop job IDs for actions such as Pig, by specifying the key as hadoopJobs. An example is shown below.

${jobTracker} ${nameNode} mapred.job.queue.name ${queueName} mapred.compress.map.output true id.pig INPUT=/user/${wf:user()}/${examplesRoot}/input-data/text OUTPUT=/user/${wf:user()}/${examplesRoot}/output-data/pig ${jobTracker} ${nameNode} mapred.job.queue.name ${queueName} MyTest ${wf:actionData("pig-node")["hadoopJobs"]} Pig failed, error message[${wf:errorMessage(wf:lastErrorNode())}]

4.2.7 HDFS EL FunctionsFor all the functions in this section the path must include the FS URI. For examplehdfs://foo:8020/user/tucu.boolean fs:exists(String path)It returnstrueorfalsedepending if the specified path URI exists or not.boolean fs:isDir(String path)It returnstrueif the specified path URI exists and it is a directory, otherwise it returnsfalse.boolean fs:dirSize(String path)It returns the size in bytes of all the files in the specified path. If the path is not a directory, or if it does not exist it returns -1. It does not work recursively, only computes the size of the files under the specified path.boolean fs:fileSize(String path)It returns the size in bytes of specified file. If the path is not a file, or if it does not exist it returns -1.boolean fs:blockSize(String path)It returns the block size in bytes of specified file. If the path is not a file, or if it does not exist it returns -1.5 Workflow NotificationsWorkflow jobs can be configured to make an HTTP GET notification upon start and end of a workflow action node and upon the completion of a workflow job.Oozie will make a best effort to deliver the notifications, in case of failure it will retry the notification a pre-configured number of times at a pre-configured interval before giving up.See alsoCoordinator Notifications5.1 Workflow Job Status NotificationIf theoozie.wf.workflow.notification.urlproperty is present in the workflow job properties when submitting the job, Oozie will make a notification to the provided URL when the workflow job changes its status.If the URL contains any of the following tokens, they will be replaced with the actual values by Oozie before making the notification: $jobId: The workflow job ID $status: the workflow current state5.2 Node Start and End NotificationsIf theoozie.wf.action.notification.urlproperty is present in the workflow job properties when submitting the job, Oozie will make a notification to the provided URL every time the workflow job enters and exits an action node. For decision nodes, Oozie will send a single notification with the name of the first evaluation that resolved totrue.If the URL contains any of the following tokens, they will be replaced with the actual values by Oozie before making the notification: $jobId: The workflow job ID $nodeName: The name of the workflow node $status: If the action has not completed yet, it contains the action status 'S:'. If the action has ended, it contains the action transition 'T:'6 User PropagationWhen submitting a workflow job, the configuration must contain auser.nameproperty. If security is enabled, Oozie must ensure that the value of theuser.nameproperty in the configuration match the user credentials present in the protocol (web services) request.When submitting a workflow job, the configuration may contain theoozie.job.aclproperty (thegroup.nameproperty has been deprecated). If authorization is enabled, this property is treated as as the ACL for the job, it can contain user and group IDs separated by commas.The specified user and ACL are assigned to the created job.Oozie must propagate the specified user and ACL to the system executing the actions.It is not allowed for map-reduce, pig and fs actions to override user/group information.7 Workflow Application DeploymentWhile Oozie encourages the use of self-contained applications (J2EE application model), it does not enforce it.Workflow applications are installed in an HDFS directory. 
To submit a job for a workflow application the path to the HDFS directory where the application is must be specified.The layout of a workflow application directory is: - /workflow.xml - /config-default.xml | - /lib/ (*.jar;*.so)A workflow application must contain at least the workflow definition, theworkflow.xmlfile.All configuration files and scripts (Pig and shell) needed by the workflow action nodes should be under the application HDFS directory.All the JAR files and native libraries within the application 'lib/' directory are automatically added to the map-reduce and pig jobsclasspathandLD_PATH.Additional JAR files and native libraries not present in the application 'lib/' directory can be specified in map-reduce and pig actions with the 'file' element (refer to the map-reduce and pig documentation).For Map-Reduce jobs (not including streaming or pipes), additional jar files can also be included via an uber jar. An uber jar is a jar file that contains additional jar files within a "lib" folder. To let Oozie know about an uber jar, simply specify it with theoozie.mapreduce.uber.jarconfiguration property and Oozie will tell Hadoop MapReduce that it is an uber jar. The ability to specify an uber jar is governed by theoozie.action.mapreduce.uber.jar.enableproperty inoozie-site.xml. SeeOozie Installfor more information.

${jobTracker} ${nameNode} oozie.mapreduce.uber.jar ${nameNode}/user/${wf:user()}/my-uber-jar.jar
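Expanded, the uber jar property from the fragment above would be set inside the map-reduce action configuration, roughly as:

    <map-reduce>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
            <property>
                <name>oozie.mapreduce.uber.jar</name>
                <value>${nameNode}/user/${wf:user()}/my-uber-jar.jar</value>
            </property>
        </configuration>
    </map-reduce>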

Theconfig-default.xmlfile defines, if any, default values for the workflow job parameters. This file must be in the Hadoop Configuration XML format. EL expressions are not supported anduser.nameproperty cannot be specified in this file.Any other resources likejob.xmlfiles referenced from a workflow action action node must be included under the corresponding path, relative paths always start from the root of the workflow application.8 External Data AssumptionsOozie runs workflow jobs under the assumption all necessary data to execute an action is readily available at the time the workflow job is about to executed the action.In addition, it is assumed, but it is not the responsibility of Oozie, that all input data used by a workflow job is immutable for the duration of the workflow job.9 Workflow Jobs LifecycleA workflow job can have be in any of the following states:PREP:When a workflow job is first create it will be inPREPstate. The workflow job is defined but it is not running.RUNNING:When aCREATEDworkflow job is started it goes intoRUNNINGstate, it will remain inRUNNINGstate while it does not reach its end state, ends in error or it is suspended.SUSPENDED:ARUNNINGworkflow job can be suspended, it will remain inSUSPENDEDstate until the workflow job is resumed or it is killed.SUCCEEDED:When aRUNNINGworkflow job reaches theendnode it ends reaching theSUCCEEDEDfinal state.KILLED:When aCREATED,RUNNINGorSUSPENDEDworkflow job is killed by an administrator or the owner via a request to Oozie the workflow job ends reaching theKILLEDfinal state.FAILED:When aRUNNINGworkflow job fails due to an unexpected error it ends reaching theFAILEDfinal state.Workflow job state valid transitions: -->PREP PREP-->RUNNING|KILLED RUNNING-->SUSPENDED|SUCCEEDED|KILLED|FAILED SUSPENDED-->RUNNING|KILLED10 Workflow Jobs Recovery (re-run)Oozie must provide a mechanism by which a a failed workflow job can be resubmitted and executed starting after any action node that has completed its execution in the prior run. This is specially useful when the already executed action of a workflow job are too expensive to be re-executed.It is the responsibility of the user resubmitting the workflow job to do any necessary cleanup to ensure that the rerun will not fail due to not cleaned up data from the previous run.When starting a workflow job in recovery mode, the user must indicate either what workflow nodes in the workflow should be skipped or whether job should be restarted from the failed node. At any rerun, only one option should be selected. The workflow nodes to skip must be specified in theoozie.wf.rerun.skip.nodesjob configuration property, node names must be separated by commas. On the other hand, user needs to specifyoozie.wf.rerun.failnodesto rerun from the failed node. The value istrueorfalse. All workflow nodes indicated as skipped must have completed in the previous run. If a workflow node has not completed its execution in its previous run, and during the recovery submission is flagged as a node to be skipped, the recovery submission must fail.The recovery workflow job will run under the same workflow job ID as the original workflow job.To submit a recovery workflow job the target workflow job to recover must be in an end state (=SUCCEEDED=,FAILEDorKILLED).A recovery run could be done using a new worklfow application path under certain constraints (see next paragraph). 
This is to allow users to do a one off patch for the workflow application without affecting other running jobs for the same application.A recovery run could be done using different workflow job parameters, the new values of the parameters will take effect only for the workflow nodes executed in the rerun.The workflow application use for a re-run must match the execution flow, node types, node names and node configuration for all executed nodes that will be skipped during recovery. This cannot be checked by Oozie, it is the responsibility of the user to ensure this is the case.Oozie provides theint wf:run()EL function to returns the current run for a job, this function allows workflow applications to perform custom logic at workflow definition level (i.e. in adecisionnode) or at action node level (i.e. by passing the value of thewf:run()function as a parameter to the task).11 Oozie Web Services APISee theWeb Services APIpage.12 Client APIOozie provides a JavaClient APIthat allows to perform all common workflow job operations.The client API includes aLocalOozie classuseful for testing a workflow from within an IDE and for unit testing purposes.The Client API is implemented as a client of the Web Services API.13 Command Line ToolsOozie provides command line tool that allows to perform all common workflow job operations.The command line tool is implemented as a client of the Web Services API.14 Web UI ConsoleOozie provides a read-only Web based console that allows to allow to monitor Oozie system status, workflow applications status and workflow jobs status.The Web base console is implemented as a client of the Web Services API.15 Customizing Oozie with ExtensionsOut of the box Oozie provides support for a predefined set of action node types and Expression Language functions.Oozie provides a well defined API,Action executorAPI, to add support for additional action node types.Extending Oozie should not require any code change to the Oozie codebase. It will require adding the JAR files providing the new functionality and declaring them in Oozie system configuration.16 Workflow Jobs PriorityOozie does not handle workflow jobs priority. As soon as a workflow job is ready to do a transition, Oozie will trigger the transition. 
11 Oozie Web Services API

See the Web Services API page.

12 Client API

Oozie provides a Java Client API that allows performing all common workflow job operations.

The Client API includes a LocalOozie class, useful for testing a workflow from within an IDE and for unit testing purposes.

The Client API is implemented as a client of the Web Services API.

13 Command Line Tools

Oozie provides a command line tool that allows performing all common workflow job operations.

The command line tool is implemented as a client of the Web Services API.

14 Web UI Console

Oozie provides a read-only web based console that allows monitoring the Oozie system status, workflow application status and workflow job status.

The web based console is implemented as a client of the Web Services API.

15 Customizing Oozie with Extensions

Out of the box, Oozie provides support for a predefined set of action node types and Expression Language functions.

Oozie provides a well defined API, the ActionExecutor API, to add support for additional action node types.

Extending Oozie should not require any code change to the Oozie codebase. It requires adding the JAR files providing the new functionality and declaring them in the Oozie system configuration.

16 Workflow Jobs Priority

Oozie does not handle workflow job priority. As soon as a workflow job is ready to do a transition, Oozie will trigger the transition.

Workflow transitions and action triggering are assumed to be fast and lightweight operations.

Oozie assumes that the remote systems are properly sized to handle the amount of remote jobs Oozie will trigger.

Any prioritization of jobs in the remote systems is outside the scope of Oozie.

Workflow applications can influence the remote systems' priority via configuration, if the remote systems support it.

17 HDFS Share Libraries for Workflow Applications (since Oozie 2.3)

Oozie supports job and system share libraries for workflow jobs.

Share libraries can simplify the deployment and management of common components across workflow applications. For example, if a workflow job uses a share library with the Streaming, Pig and Har JAR files, it does not have to bundle those JAR files in the workflow application lib/ path.

If a workflow job uses a share library, Oozie will include all the JAR/SO files in the library in the classpath/libpath of all its actions.

A workflow job can specify a share library path using the job property oozie.libpath.

A workflow job can use the system share library by setting the job property oozie.use.system.libpath to true.

17.1 Action Share Library Override (since Oozie 3.3)

Oozie share libraries are organized per action type; for example, the Pig action share library directory is share/lib/pig/ and the MapReduce Streaming share library directory is share/lib/mapreduce-streaming/.

Oozie bundles a share library for specific versions of the streaming, pig, hive, sqoop and distcp actions. These versions of streaming, pig, hive, sqoop and distcp have been tested and verified to work correctly with the version of Oozie that includes them.

In addition, Oozie provides a mechanism to override the action share library JARs to allow using an alternate version of the action JARs. This mechanism enables Oozie administrators to patch share library JARs and to include alternate versions of the share libraries, providing access to more than one version at the same time.

The share library override is supported at server level and at job level. The share library name is resolved using the following precedence order:

1. action.sharelib.for.#ACTIONTYPE# in the action configuration
2. action.sharelib.for.#ACTIONTYPE# in the job configuration
3. action.sharelib.for.#ACTIONTYPE# in the Oozie server configuration
4. the action's ActionExecutor getDefaultShareLibName() method
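For example, a job could enable the system share library and override the library used by its pig actions through its job properties; a minimal sketch (the pig-alt library name is illustrative and assumes an administrator has installed an alternate Pig library under the share library path):

# use the system share library for this workflow job
oozie.use.system.libpath=true
# resolve pig actions against an alternate share library
action.sharelib.for.pig=pig-alt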
18 User-Retry for Workflow Actions (since Oozie 3.1)

Oozie provides User-Retry capabilities when an action is in ERROR or FAILED state.

Depending on the nature of the failure, Oozie defines which types of errors are allowed for User-Retry. Certain errors are allowed for User-Retry by default, for example: the file-exists errors FS009 and FS008 when using chmod in a workflow fs action, the output-directory-exists error JA018 in a workflow map-reduce action, the job-not-exists error JA017 in an action executor, FileNotFoundException JA008 in an action executor, and IOException JA009 in an action executor.

User-Retry allows the user to specify a number of retries (which must not exceed the system maximum), so the user can find the causes of the problem and fix them while the action is in USER_RETRY state. If the failure or error does not go away after the maximum number of retries, the action becomes FAILED or ERROR and Oozie marks the workflow job FAILED or KILLED.

An Oozie administrator can allow more error codes to be handled for User-Retry by adding the configuration property oozie.service.LiteWorkflowStoreService.user.retry.error.code.ext to oozie-site.xml with the error codes as its value; these error codes will be considered for User-Retry after a system restart.

An example of User-Retry in a workflow action follows.
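A minimal sketch, assuming a pig action and illustrative node names, paths and transition targets; retry-max and retry-interval are the action attributes that control User-Retry:

<action name="pig-node" retry-max="3" retry-interval="1">
    <pig>
        <job-tracker>${jobtracker}</job-tracker>
        <name-node>${namenode}</name-node>
        <script>id.pig</script>
    </pig>
    <ok to="end"/>
    <error to="fail"/>
</action>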

19 Global Configurations

Oozie allows a global section to reduce the redundant job-tracker and name-node declarations for each action. The user can define a global section at the beginning of the workflow.xml. The global section may contain the job-xml, configuration, job-tracker or name-node that the user would like to set for every action. If a user then redefines one of these in a specific action node, Oozie will use the specific declaration instead of the global one for that action.

Example of a global element:

<global>
    <job-tracker>${job-tracker}</job-tracker>
    <name-node>${name-node}</name-node>
    <job-xml>job1.xml</job-xml>
    <configuration>
        <property>
            <name>mapred.job.queue.name</name>
            <value>${queueName}</value>
        </property>
    </configuration>
</global>
...
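A sketch of an action node that overrides the global queue while inheriting the remaining global settings (node names, transitions and the queue value are illustrative, and a schema version with global-section support is assumed):

<action name="mr-node">
    <map-reduce>
        <!-- job-tracker, name-node and job1.xml are taken from the global section -->
        <configuration>
            <property>
                <name>mapred.job.queue.name</name>
                <value>otherQueue</value>
            </property>
        </configuration>
    </map-reduce>
    <ok to="end"/>
    <error to="fail"/>
</action>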

20 Suspend On Nodes

Specifying oozie.suspend.on.nodes in a job.properties file lets users specify a list of actions that will cause Oozie to automatically suspend the workflow upon reaching any of those actions, like breakpoints in a debugger. To continue running the workflow, the user simply uses the -resume command from the Oozie CLI. Specifying a * will cause Oozie to suspend the workflow on all nodes.

For example:

oozie.suspend.on.nodes=mr-node,my-pig-action,my-fork

Specifying the above in a job.properties file will cause Oozie to suspend the workflow when any of those three actions is about to be executed.

Appendixes

Appendix A, Oozie XML-Schema

The Oozie workflow XML schema is available in versions 0.4.5, 0.4, 0.3, 0.2.5, 0.2 and 0.1. The Oozie SLA schema (version 0.1) is supported in Oozie schema version 0.2.
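A workflow definition selects one of these schema versions through the xmlns declaration on its workflow-app root element; a minimal skeleton (the application name is illustrative):

<workflow-app name="example-wf" xmlns="uri:oozie:workflow:0.4">
    <start to="end"/>
    <!-- action and control flow nodes are defined between the start and end nodes -->
    <end name="end"/>
</workflow-app>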
Appendix B, Workflow Examples

Fork and Join Example

The following workflow definition example executes 4 Map-Reduce jobs in 3 steps: 1 job, then 2 jobs in parallel, then 1 job. The output of the jobs in a previous step is used as input for the next jobs.

Required workflow job parameters:

jobtracker: JobTracker HOST:PORT
namenode: NameNode HOST:PORT
input: input directory
output: output directory

<workflow-app name="example-forkjoinwf" xmlns="uri:oozie:workflow:0.1">
    <!-- Workflow and node names are illustrative; the job configurations below are from the original example. -->
    <start to="firstjob"/>
    <action name="firstjob">
        <map-reduce>
            <job-tracker>${jobtracker}</job-tracker>
            <name-node>${namenode}</name-node>
            <configuration>
                <property><name>mapred.mapper.class</name><value>org.apache.hadoop.example.IdMapper</value></property>
                <property><name>mapred.reducer.class</name><value>org.apache.hadoop.example.IdReducer</value></property>
                <property><name>mapred.map.tasks</name><value>1</value></property>
                <property><name>mapred.input.dir</name><value>${input}</value></property>
                <property><name>mapred.output.dir</name><value>/usr/foo/${wf:id()}/temp1</value></property>
            </configuration>
        </map-reduce>
        <ok to="fork"/>
        <error to="kill"/>
    </action>
    <fork name="fork">
        <path start="secondjob"/>
        <path start="thirdjob"/>
    </fork>
    <action name="secondjob">
        <map-reduce>
            <job-tracker>${jobtracker}</job-tracker>
            <name-node>${namenode}</name-node>
            <configuration>
                <property><name>mapred.mapper.class</name><value>org.apache.hadoop.example.IdMapper</value></property>
                <property><name>mapred.reducer.class</name><value>org.apache.hadoop.example.IdReducer</value></property>
                <property><name>mapred.map.tasks</name><value>1</value></property>
                <property><name>mapred.input.dir</name><value>/usr/foo/${wf:id()}/temp1</value></property>
                <property><name>mapred.output.dir</name><value>/usr/foo/${wf:id()}/temp2</value></property>
            </configuration>
        </map-reduce>
        <ok to="join"/>
        <error to="kill"/>
    </action>
    <action name="thirdjob">
        <map-reduce>
            <job-tracker>${jobtracker}</job-tracker>
            <name-node>${namenode}</name-node>
            <configuration>
                <property><name>mapred.mapper.class</name><value>org.apache.hadoop.example.IdMapper</value></property>
                <property><name>mapred.reducer.class</name><value>org.apache.hadoop.example.IdReducer</value></property>
                <property><name>mapred.map.tasks</name><value>1</value></property>
                <property><name>mapred.input.dir</name><value>/usr/foo/${wf:id()}/temp1</value></property>
                <property><name>mapred.output.dir</name><value>/usr/foo/${wf:id()}/temp3</value></property>
            </configuration>
        </map-reduce>
        <ok to="join"/>
        <error to="kill"/>
    </action>
    <join name="join" to="finaljob"/>
    <action name="finaljob">
        <map-reduce>
            <job-tracker>${jobtracker}</job-tracker>
            <name-node>${namenode}</name-node>
            <configuration>
                <property><name>mapred.mapper.class</name><value>org.apache.hadoop.example.IdMapper</value></property>
                <property><name>mapred.reducer.class</name><value>org.apache.hadoop.example.IdReducer</value></property>
                <property><name>mapred.map.tasks</name><value>1</value></property>
                <property><name>mapred.input.dir</name><value>/usr/foo/${wf:id()}/temp2,/usr/foo/${wf:id()}/temp3</value></property>
                <property><name>mapred.output.dir</name><value>${output}</value></property>
            </configuration>
        </map-reduce>
        <ok to="end"/>
        <error to="kill"/>
    </action>
    <kill name="kill">
        <message>Map/Reduce failed, error message[${wf:errorMessage()}]</message>
    </kill>
    <end name="end"/>
</workflow-app>