【Oozie】Oozie Study Notes

The English word "oozie" means mahout (an elephant keeper). Oozie is an open-source workflow-engine framework, contributed to Apache by Cloudera, that schedules and coordinates Hadoop MapReduce and Pig jobs. Oozie must run inside a Java Servlet container. It is mainly used to schedule timed tasks, and multiple tasks can be scheduled in their logical execution order.

Features

Oozie is a workflow scheduling system for managing Hadoop jobs.
An Oozie workflow is a directed acyclic graph (DAG) of actions.
An Oozie coordinator job triggers a workflow based on time (frequency) and data availability.
Oozie is an open-source workflow engine developed by Yahoo for Apache Hadoop; it manages and coordinates jobs running on the Hadoop platform (HDFS, Pig, MapReduce) and was designed for Yahoo's large-scale, complex workflows and data pipelines.
Oozie revolves around two cores: the workflow, which defines a job's topology and execution logic, and the coordinator, which handles workflow dependencies and triggering.

Modules

  1. Workflow: executes flow nodes in sequence; supports fork (branching into multiple nodes) and join (merging multiple nodes back into one).

  2. Coordinator: triggers workflows on a schedule.

  3. Bundle: groups multiple Coordinators into one job.

Common node types

  1. Control flow nodes (Control Flow Nodes): usually defined at the start or end of a workflow, such as start, end, and kill, plus the nodes that steer the workflow's execution path, such as decision, fork, and join.

  2. Action nodes (Action Nodes): nodes that perform concrete work, such as copying files or running a shell script.

Configuration changes

core-site.xml

[hadoop@datanode1 hadoop]$ vim core-site.xml
<configuration>
<!-- NameNode address for HDFS -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://datanode1:9000</value>
</property>
<!-- Storage directory for files Hadoop generates at runtime -->
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/module/cdh/hadoop-2.5.0-cdh5.3.6/data</value>
</property>
<property>
<name>fs.trash.interval</name>
<value>60</value>
</property>
<!-- Hosts from which the hadoop user (running Oozie) may impersonate other users -->
<property>
<name>hadoop.proxyuser.hadoop.hosts</name>
<value>*</value>
</property>

<!-- Groups of users that may be proxied by Oozie -->
<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>*</value>
</property>
</configuration>

In the hadoop.proxyuser.*.hosts-style properties, replace the embedded "hadoop" with your own Hadoop user name; it is hadoop here because that happens to be my user name.

yarn-site.xml

[hadoop@datanode1 hadoop]$ vim yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>

<property>
<name>yarn.resourcemanager.hostname</name>
<value>datanode2</value>
</property>

<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>

<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>86400</value>
</property>

<!-- Job history log server URL -->
<property>
<name>yarn.log.server.url</name>
<value>http://datanode1:19888/jobhistory/logs/</value>
</property>
</configuration>

mapred-site.xml

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<!-- MapReduce JobHistory Server address; default port 10020 -->
<property>
<name>mapreduce.jobhistory.address</name>
<value>datanode1:10020</value>
</property>
<!-- MapReduce JobHistory Server web UI address; default port 19888 -->
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>datanode1:19888</value>
</property>
</configuration>

Do not forget to sync these files to the other cluster nodes, then run hdfs namenode -format to initialize.

Deploying Oozie

Unpack hadooplibs into the Oozie root directory:

tar -zxf oozie-hadooplibs-4.0.0-cdh5.3.6.tar.gz -C ../

Create a libext directory under the Oozie root:

mkdir libext/

Copy the dependency jars:

cp -ra hadooplibs/hadooplib-2.5.0-cdh5.3.6.oozie-4.0.0-cdh5.3.6/* libext/

Upload the MySQL driver jar to the libext directory.

Copy ext-2.2.zip to the libext directory.

Modify oozie-site.xml

Property: oozie.service.JPAService.jdbc.driver
Value: com.mysql.jdbc.Driver
Meaning: the JDBC driver class

Property: oozie.service.JPAService.jdbc.url
Value: jdbc:mysql://datanode1:3306/oozie
Meaning: the database URL Oozie uses

Property: oozie.service.JPAService.jdbc.username
Value: root
Meaning: the database user name

Property: oozie.service.JPAService.jdbc.password
Value: 123456
Meaning: the database password

Property: oozie.service.HadoopAccessorService.hadoop.configurations
Value: *=/opt/module/cdh/hadoop-2.5.0-cdh5.3.6/etc/hadoop
Meaning: points Oozie at the Hadoop configuration files

Create the Oozie database in MySQL:

mysql -uroot -p123456
mysql> create database oozie;

Initialize Oozie

bin/oozie-setup.sh sharelib create -fs hdfs://datanode1:9000 -locallib oozie-sharelib-4.0.0-cdh5.3.6-yarn.tar.gz
# Create the oozie.sql file and the database tables:
bin/oozie-setup.sh db create -run -sqlfile oozie.sql
# Build the project and generate the war package:
bin/oozie-setup.sh prepare-war

The prepare-war step requires the zip command, which a minimal OS install may be missing.

Oozie service

bin/oozied.sh start
# To shut down the Oozie service cleanly, use:
bin/oozied.sh stop

Web UI

(screenshot: the Oozie web console)

Oozie jobs

Scheduling a shell script

1. Unpack the official examples:

tar -zxf oozie-examples.tar.gz

2. Create a working directory:

mkdir oozie-apps/

3. Copy the shell job template:

cp -r examples/apps/shell/ oozie-apps/

4. The shell script:

#!/bin/bash
mkdir -p /home/hadoop/oozie-test1
cd /home/hadoop/oozie-test1
for (( i = 1; i <= 100; i++ ))
do
    d=$(date '+%Y-%m-%d %H:%M:%S')
    echo "data:$d $i" >> /home/hadoop/oozie-test1/logs.log
done
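Before wiring the script into Oozie, it can be dry-run locally. A minimal sketch, with a temporary directory standing in for /home/hadoop/oozie-test1, that verifies the loop appends exactly 100 timestamped lines:

```shell
#!/bin/bash
# Dry-run of the log-generating loop against a throwaway directory
# (a stand-in for /home/hadoop/oozie-test1).
set -e
workdir=$(mktemp -d)

for (( i = 1; i <= 100; i++ )); do
    d=$(date '+%Y-%m-%d %H:%M:%S')
    echo "data:$d $i" >> "$workdir/logs.log"
done

count=$(wc -l < "$workdir/logs.log")
echo "wrote $count lines to $workdir/logs.log"
rm -rf "$workdir"
```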

5. job.properties:

nameNode=hdfs://datanode1:9000
jobTracker=datanode2:8032
queueName=shell
examplesRoot=oozie-apps

oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/shell
EXEC=p1.sh

6. workflow.xml:

<workflow-app xmlns="uri:oozie:workflow:0.4" name="shell-wf">
<start to="shell-node"/>
<action name="shell-node">
<shell xmlns="uri:oozie:shell-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<exec>${EXEC}</exec>
<file>/user/hadoop/oozie-apps/shell/${EXEC}#${EXEC}</file>
<capture-output/>
</shell>
<ok to="end"/>
<error to="fail"/>
</action>
<decision name="check-output">
<switch>
<case to="end">
${wf:actionData('shell-node')['my_output'] eq 'Hello Oozie'}
</case>
<default to="fail-output"/>
</switch>
</decision>
<kill name="fail">
<message>Shell action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<kill name="fail-output">
<message>Incorrect output, expected [Hello Oozie] but was [${wf:actionData('shell-node')['my_output']}]</message>
</kill>
<end name="end"/>
</workflow-app>

7. Upload the job configuration to HDFS:

/opt/module/cdh/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -put -f oozie-apps/ /user/hadoop

8. Run the job:

bin/oozie job -oozie http://datanode1:11000/oozie -config oozie-apps/shell/job.properties -run
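On a successful submit, the Oozie client prints the new workflow job ID on a line of the form "job: <id>". A sketch of capturing that ID for later -info or -kill calls; the submit output is simulated here (with the ID from the kill example on this page) so the snippet stands alone:

```shell
#!/bin/bash
# In a live session this would be:
#   submit_output=$(bin/oozie job -oozie http://datanode1:11000/oozie \
#       -config oozie-apps/shell/job.properties -run)
# Simulated here with the line format the Oozie client prints:
submit_output="job: 0000004-170425105153692-oozie-z-W"

# Strip the "job: " prefix to get the bare workflow job ID.
job_id=${submit_output#job: }
echo "$job_id"
# The ID can then be reused, e.g.:
#   bin/oozie job -oozie http://datanode1:11000/oozie -info "$job_id"
```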

9. Kill a job:

bin/oozie job -oozie http://datanode1:11000/oozie -kill 0000004-170425105153692-oozie-z-W


Scheduling shell scripts with workflow logic

Build on the previous example with a few changes.

1. job.properties:

nameNode=hdfs://datanode1:9000
jobTracker=datanode2:8032
queueName=shell
examplesRoot=oozie-apps

oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/shell
EXEC1=p1.sh
EXEC2=p2.sh

2. Script p1.sh:

#!/bin/bash
mkdir -p /home/hadoop/Oozie2_test_p1
cd /home/hadoop/Oozie2_test_p1
for (( i = 1; i <= 100; i++ ))
do
    d=$(date '+%Y-%m-%d %H:%M:%S')
    echo "data:$d $i" >> /home/hadoop/Oozie2_test_p1/Oozie2_p1.log
done

3. Script p2.sh:

#!/bin/bash
mkdir -p /home/hadoop/Oozie2_test_p2
cd /home/hadoop/Oozie2_test_p2
for (( i = 1; i <= 100; i++ ))
do
    d=$(date '+%Y-%m-%d %H:%M:%S')
    echo "data:$d $i" >> /home/hadoop/Oozie2_test_p2/Oozie2_p2.log
done

4. workflow.xml:

<workflow-app xmlns="uri:oozie:workflow:0.4" name="shell-wf">
<start to="shell-node"/>
<action name="shell-node">
<shell xmlns="uri:oozie:shell-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<exec>${EXEC1}</exec>
<file>/user/hadoop/oozie-apps/shell/${EXEC1}#${EXEC1}</file>
<capture-output/>
</shell>
<ok to="p2-shell-node"/>
<error to="fail"/>
</action>

<action name="p2-shell-node">
<shell xmlns="uri:oozie:shell-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<exec>${EXEC2}</exec>
<file>/user/hadoop/oozie-apps/shell/${EXEC2}#${EXEC2}</file>
<!-- <argument>my_output=Hello Oozie</argument>-->
<capture-output/>
</shell>
<ok to="end"/>
<error to="fail"/>
</action>

<decision name="check-output">
<switch>
<case to="end">
${wf:actionData('shell-node')['my_output'] eq 'Hello Oozie'}
</case>
<default to="fail-output"/>
</switch>
</decision>
<kill name="fail">
<message>Shell action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<kill name="fail-output">
<message>Incorrect output, expected [Hello Oozie] but was [${wf:actionData('shell-node')['my_output']}]</message>
</kill>
<end name="end"/>
</workflow-app>

Scheduling MapReduce

Prerequisite: make sure YARN is up and working.

1. Copy the official map-reduce template into oozie-apps:

[hadoop@datanode1 oozie-4.0.0-cdh5.3.6]$ cp -r examples/apps/map-reduce/ oozie-apps/

2. Configure job.properties:

nameNode=hdfs://datanode1:9000
jobTracker=datanode2:8032
queueName=map-reduce
examplesRoot=oozie-apps

oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/map-reduce/workflow.xml
outputDir=/output

3. workflow.xml:

<workflow-app xmlns="uri:oozie:workflow:0.2" name="map-reduce-wf">
<start to="mr-node"/>
<action name="mr-node">
<map-reduce>
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<prepare>
<delete path="/_output"/>
</prepare>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
<!-- Use the new MapReduce API when scheduling the MR job -->
<property>
<name>mapred.mapper.new-api</name>
<value>true</value>
</property>

<property>
<name>mapred.reducer.new-api</name>
<value>true</value>
</property>
<!-- Output key class for the job -->
<property>
<name>mapreduce.job.output.key.class</name>
<value>org.apache.hadoop.io.Text</value>
</property>
<!-- Output value class for the job -->
<property>
<name>mapreduce.job.output.value.class</name>
<value>org.apache.hadoop.io.IntWritable</value>
</property>
<!-- Mapper class -->
<property>
<name>mapreduce.job.map.class</name>
<value>org.apache.hadoop.examples.WordCount$TokenizerMapper</value>
</property>
<!-- Reducer class -->
<property>
<name>mapreduce.job.reduce.class</name>
<value>org.apache.hadoop.examples.WordCount$IntSumReducer</value>
</property>
<property>
<name>mapred.map.tasks</name>
<value>1</value>
</property>
<property>
<name>mapred.input.dir</name>
<value>/input</value>
</property>
<property>
<name>mapred.output.dir</name>
<value>/_output</value>
</property>
</configuration>
</map-reduce>
<ok to="end"/>
<error to="fail"/>
</action>
<kill name="fail">
<message>Map/Reduce failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>

4. Copy the examples jar into the application's lib directory:

[hadoop@datanode1 lib]$ cp /opt/module/cdh/hadoop-2.5.0-cdh5.3.6/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0-cdh5.3.6.jar ./

5. Upload the job configuration:

/opt/module/cdh/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -put -f oozie-apps /user/hadoop/oozie-apps

6. Run the job:

[hadoop@datanode1 oozie-4.0.0-cdh5.3.6]$  bin/oozie job -oozie http://datanode1:11000/oozie -config oozie-apps/map-reduce/job.properties -run

7. Check the results:

[hadoop@datanode1 module]$ /opt/module/cdh/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -cat /input/*.txt
19/01/10 19:13:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
I
Love
Hadoop
and
Sopark
I
Love
BigData
and
AI
[hadoop@datanode1 module]$ /opt/module/cdh/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -cat /_output/p*
19/01/10 19:13:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
AI 1
BigData 1
Hadoop 1
I 2
Love 2
Sopark 1
and 2

Scheduling timed / recurring jobs

Prerequisites:

## Check the system's current time zone:
date -R
## If the offset shown is not +0800, remove /etc/localtime and symlink a zone file for the correct time zone:
rm -rf /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
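The same check can be scripted. A small sketch that pins TZ explicitly (so the result is deterministic regardless of /etc/localtime) and verifies the Asia/Shanghai offset is +0800:

```shell
#!/bin/bash
# Mirror of the manual `date -R` check: confirm the UTC offset for
# Asia/Shanghai resolves to +0800.
offset=$(TZ=Asia/Shanghai date +%z)
echo "Asia/Shanghai offset: $offset"
if [ "$offset" = "+0800" ]; then
    echo "timezone OK"
else
    echo "unexpected offset; relink /etc/localtime" >&2
fi
```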

NTP configuration

vim /etc/ntp.conf

Master node configuration (screenshot)

Slave node configuration (screenshot)

Synchronize time on the slave nodes:

service ntpd restart
chkconfig ntpd on                        # start ntpd at boot
ntpdate -u datanode1
crontab -e
0 */1 * * * /usr/sbin/ntpdate datanode1  # sync once per hour; create this crontab entry as root
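Note the minute field: "0 */1 * * *" fires once per hour, while "* */1 * * *" fires every single minute, because "*" in the minute position matches all 60 minutes. A toy matcher for a single cron field (an illustrative sketch handling only "*", "*/n", and plain numbers, which is enough for this entry):

```shell
#!/bin/bash
# Toy evaluator for a single cron field (assumption: only "*", "*/n",
# and plain numeric values need handling here).
fires() {  # usage: fires <field> <current-value>
    case $1 in
        \*)   return 0 ;;                  # "*" matches every value
        \*/*) (( $2 % ${1#*/} == 0 )) ;;   # "*/n" matches multiples of n
        *)    [ "$1" -eq "$2" ] ;;         # a plain number matches itself
    esac
}

# At 14:30 -- minute=30, hour=14:
fires '*' 30 && fires '*/1' 14 && echo "'* */1 * * *' fires at 14:30"
fires '0' 30 || echo "'0 */1 * * *' does not fire at 14:30"
# At 14:00 -- minute=0:
fires '0' 0 && fires '*/1' 14 && echo "'0 */1 * * *' fires at 14:00"
```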

1. Configure oozie-site.xml:

Property: oozie.processing.timezone
Value: GMT+0800
Meaning: set the processing time zone to UTC+8

2. Modify the JS console code:

vi /opt/module/cdh/oozie-4.0.0-cdh5.3.6/oozie-server/webapps/oozie/oozie-console.js
Change it as follows:
function getTimeZone() {
Ext.state.Manager.setProvider(new Ext.state.CookieProvider());
return Ext.state.Manager.get("TimezoneId","GMT+0800");
}

3. Restart the Oozie service and reload the browser (be sure to clear the browser cache):

bin/oozied.sh stop
bin/oozied.sh start

4. Copy the official cron template:

cp -r examples/apps/cron/ oozie-apps/

5. Modify job.properties:

nameNode=hdfs://datanode1:9000
jobTracker=datanode2:8032
queueName=cronTask
examplesRoot=oozie-apps

oozie.coord.application.path=${nameNode}/user/${user.name}/${examplesRoot}/cron
start=2019-01-10T21:40+0800
end=2019-01-10T22:00+0800
workflowAppUri=${nameNode}/user/${user.name}/${examplesRoot}/cron

EXEC3=p3.sh

6. Modify coordinator.xml. Note: the 5 in ${coord:minutes(5)} is the minimum frequency Oozie allows; it cannot be set any lower.

<coordinator-app name="cron-coord" frequency="${coord:minutes(5)}" start="${start}" end="${end}" timezone="GMT+0800"
xmlns="uri:oozie:coordinator:0.2">
<action>
<workflow>
<app-path>${workflowAppUri}</app-path>
<configuration>
<property>
<name>jobTracker</name>
<value>${jobTracker}</value>
</property>
<property>
<name>nameNode</name>
<value>${nameNode}</value>
</property>
<property>
<name>queueName</name>
<value>${queueName}</value>
</property>
</configuration>
</workflow>
</action>
</coordinator-app>
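With the job.properties above (start 21:40, end 22:00, a five-minute frequency), the coordinator materializes actions at 21:40, 21:45, 21:50, and 21:55; the end time is exclusive. A quick sketch of that count, using minutes-of-day to keep it self-contained:

```shell
#!/bin/bash
# Count the actions materialized between start (inclusive) and end
# (exclusive) at the coordinator's frequency. Times are expressed as
# minutes-of-day for simplicity.
start=$(( 21 * 60 + 40 ))   # 2019-01-10T21:40+0800
end=$(( 22 * 60 ))          # 2019-01-10T22:00+0800
freq=5                      # ${coord:minutes(5)}

actions=0
for (( t = start; t < end; t += freq )); do
    actions=$(( actions + 1 ))
done
echo "coordinator materializes $actions actions"
```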

7. Create the script:

#!/bin/bash
d=$(date '+%Y-%m-%d %H:%M:%S')
echo "data:$d" >> /home/hadoop/Oozie3_p3.log

8. Modify workflow.xml:

<workflow-app xmlns="uri:oozie:workflow:0.5" name="one-op-wf">
<start to="p3-shell-node"/>
<action name="p3-shell-node">
<shell xmlns="uri:oozie:shell-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<exec>${EXEC3}</exec>
<file>/user/hadoop/oozie-apps/cron/${EXEC3}#${EXEC3}</file>
<!-- <argument>my_output=Hello Oozie</argument>-->
<capture-output/>
</shell>
<ok to="end"/>
<error to="fail"/>
</action>
<kill name="fail">
<message>Shell action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<kill name="fail-output">
<message>Incorrect output, expected [Hello Oozie] but was [${wf:actionData('shell-node')['my_output']}]</message>
</kill>
<end name="end"/>
</workflow-app>

9. Upload the configuration:

/opt/module/cdh/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -put oozie-apps/cron/ /user/hadoop/oozie-apps

10. Submit the job:

bin/oozie job -oozie http://datanode1:11000/oozie -config oozie-apps/cron/job.properties -run

