JStorm Installation and Cluster Setup (Storm cluster configuration is similar)

    The previous article covered setting up the ZooKeeper cluster (link). This article covers installing JStorm, configuring the JStorm cluster, and setting up the JStorm UI.
    The latest JStorm release is 2.2.1 (download link). This article uses 192.168.72.140, 141, and 142 as the ZooKeeper cluster servers, and 192.168.72.151, 152, and 153 as the JStorm cluster servers, with 151 acting as the master and UI server. Now on to the main part.
I. Environment Preparation
1. Configure the hostname and host mappings
On 151, run the following (set the matching hostname on each of the other nodes):

hostname jstorm-master
vim /etc/hosts
192.168.72.140 zookeeper-master
192.168.72.141 zookeeper-slave1
192.168.72.142 zookeeper-slave2
192.168.72.151 jstorm-master
192.168.72.152 jstorm-slave1
192.168.72.153 jstorm-slave2
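
These six entries have to be present on every machine in the cluster. A small idempotent helper avoids appending duplicate lines when it is run more than once; this is only a sketch that writes to a local demo file, so point HOSTS_FILE at /etc/hosts (as root) to apply it for real:

```shell
# Idempotent hosts updater (sketch). Writes to a demo file here;
# set HOSTS_FILE=/etc/hosts and run as root to apply it for real.
HOSTS_FILE=./hosts.demo
add_host() {
  # $1 = IP, $2 = hostname; append only if the hostname is not present yet
  grep -q " $2\$" "$HOSTS_FILE" 2>/dev/null || echo "$1 $2" >> "$HOSTS_FILE"
}
add_host 192.168.72.140 zookeeper-master
add_host 192.168.72.141 zookeeper-slave1
add_host 192.168.72.142 zookeeper-slave2
add_host 192.168.72.151 jstorm-master
add_host 192.168.72.152 jstorm-slave1
add_host 192.168.72.153 jstorm-slave2
add_host 192.168.72.151 jstorm-master   # re-running is a no-op
```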

2. Create a jstorm directory under the root directory to hold all JStorm-related files.

mkdir /jstorm

3. Extract the JStorm archive and copy it to the jstorm directory

cp -r jstorm-2.2.1 /jstorm/

4. Create a jstorm_data directory under /jstorm/jstorm-2.2.1/

mkdir /jstorm/jstorm-2.2.1/jstorm_data

5. Configure the JStorm environment variables

echo 'export JSTORM_HOME=/jstorm/jstorm-2.2.1' >> ~/.bashrc 
echo 'export PATH=$PATH:$JSTORM_HOME/bin' >> ~/.bashrc

Source the file so the changes take effect:

source ~/.bashrc
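
Re-running the two echo lines above appends duplicate entries to ~/.bashrc. A guarded variant, sketched against a demo file; set BASHRC="$HOME/.bashrc" to use it for real:

```shell
# Append the JStorm exports only if they are not there yet (sketch).
# Uses a demo file; set BASHRC="$HOME/.bashrc" to apply it for real.
BASHRC=./bashrc.demo
if ! grep -q 'JSTORM_HOME=' "$BASHRC" 2>/dev/null; then
  echo 'export JSTORM_HOME=/jstorm/jstorm-2.2.1' >> "$BASHRC"
  echo 'export PATH=$PATH:$JSTORM_HOME/bin' >> "$BASHRC"
fi
```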

6. Ensure the jstorm_data directory used for runtime data exists (this is the same directory created in step 4; mkdir -p makes re-running harmless)

mkdir -p /jstorm/jstorm-2.2.1/jstorm_data

7. Back up the storm.yaml file

cp /jstorm/jstorm-2.2.1/conf/storm.yaml /jstorm/jstorm-2.2.1/conf/storm.yaml.back
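
A timestamped variant keeps every backup instead of overwriting the single .back copy on repeated edits. The sketch below uses a demo file standing in for the real storm.yaml:

```shell
# Timestamped backup sketch; CONF points at a demo file here -- on a real
# node use CONF=/jstorm/jstorm-2.2.1/conf/storm.yaml
CONF=./storm.yaml.demo
echo 'storm.zookeeper.root: "/jstorm"' > "$CONF"   # stand-in content
cp "$CONF" "$CONF.back.$(date +%Y%m%d%H%M%S)"
ls "$CONF".back.*
```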

8. Edit the storm.yaml file

########### These MUST be filled in for a storm configuration
 storm.zookeeper.servers: 
     - "192.168.72.142"
     - "192.168.72.141"
     - "192.168.72.140"

 storm.zookeeper.root: "/jstorm"
 nimbus.host: "192.168.72.151"

# cluster.name: "default"

 #nimbus.host/nimbus.host.start.supervisor is being used by $JSTORM_HOME/bin/start.sh
 #it only support IP, please don't set hostname
 # For example
 # nimbus.host: "10.132.168.10, 10.132.168.45"
 #nimbus.host.start.supervisor: false
# %JSTORM_HOME% is the jstorm home directory
 storm.local.dir: "/jstorm/jstorm-2.2.1/jstorm_data"
 # please set absolute path, default path is JSTORM_HOME/logs
# jstorm.log.dir: "absolute path"

# java.library.path: "/usr/local/lib:/opt/local/lib:/usr/lib"

 nimbus.childopts: "-Xms1g -Xmx1g -Xmn512m -XX:SurvivorRatio=4 -XX:MaxTenuringThreshold=15 -XX:+UseConcMarkSweepGC  -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+HeapDumpOnOutOfMemoryError -XX:CMSMaxAbortablePrecleanTime=5000"  

# if supervisor.slots.ports is null, 
# the port list will be generated by cpu cores and system memory size 
# for example, 
# there are cpu_num = system_physical_cpu_num/supervisor.slots.port.cpu.weight
# there are mem_num = system_physical_memory_size/(worker.memory.size * supervisor.slots.port.mem.weight) 
# The final port number is min(cpu_num, mem_num)
# supervisor.slots.ports.base: 6800
# supervisor.slots.port.cpu.weight: 1.2
# supervisor.slots.port.mem.weight: 0.7
# supervisor.slots.ports: null
 supervisor.slots.ports:
    - 6800
    - 6801
    - 6802
    - 6803

# Default disable user-define classloader
# If there are jar conflict between jstorm and application, 
# please enable it 
# topology.enable.classloader: false

# enable supervisor use cgroup to make resource isolation
# Before enable it, you should make sure:
#   1. Linux version (>= 2.6.18)
#   2. Have installed cgroup (check the file's existence:/proc/cgroups)
#   3. You should start your supervisor on root
# You can get more about cgroup:
#   http://t.cn/8s7nexU
# supervisor.enable.cgroup: false


### Netty will send multiple messages in one batch  
### Setting true will improve throughput, but more latency
# storm.messaging.netty.transfer.async.batch: true

### default worker memory size, unit is byte
# worker.memory.size: 2147483648

# Metrics Monitor
# topology.performance.metrics: it is the switch flag for performance 
# purpose. When it is disabled, the data of timer and histogram metrics 
# will not be collected.
# topology.alimonitor.metrics.post: If it is disable, metrics data
# will only be printed to log. If it is enabled, the metrics data will be
# posted to alimonitor besides printing to log.
# topology.performance.metrics: true
# topology.alimonitor.metrics.post: false

# UI MultiCluster
# Following is an example of multicluster UI configuration
 ui.clusters:
     - {
         name: "jstorm",
         zkRoot: "/jstorm",
         zkServers:
               [ "192.168.72.140","192.168.72.141","192.168.72.142"],
         zkPort: 2181,
       }

Note: the settings that need to be modified are the following.

 storm.zookeeper.servers: 
     - "192.168.72.142"
     - "192.168.72.141"
     - "192.168.72.140"

The above lists the ZooKeeper servers.

supervisor.slots.ports:
     - 6800
     - 6801
     - 6802
     - 6803

These are the worker slot ports JStorm uses; the defaults are usually fine, so you only need to uncomment the block.
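
Before starting supervisors on the slaves, it can help to check that nothing else is already bound to these slot ports. A minimal sketch using bash's /dev/tcp redirection (assumed available; a failed connect means the port is free):

```shell
# Check whether the worker slot ports are already in use (sketch, bash-specific).
port_busy() {
  # succeeds only if something accepts a TCP connection on 127.0.0.1:$1
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}
for p in 6800 6801 6802 6803; do
  if port_busy "$p"; then echo "port $p is already in use"
  else echo "port $p is free"; fi
done
```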
Configure the location of the JStorm master (nimbus):

nimbus.host: "192.168.72.151"
 nimbus.childopts: "-Xms1g -Xmx1g -Xmn512m -XX:SurvivorRatio=4 -XX:MaxTenuringThreshold=15 -XX:+UseConcMarkSweepGC  -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+HeapDumpOnOutOfMemoryError -XX:CMSMaxAbortablePrecleanTime=5000"

Set these JVM options, otherwise you will hit memory-related errors when starting the UI.

ui.clusters:
     - {
         name: "jstorm",
         zkRoot: "/jstorm",
         zkServers:
               [ "192.168.72.140","192.168.72.141","192.168.72.142"],
         zkPort: 2181,
       }

This block configures the JStorm UI monitoring and only needs to be present on the UI server. In this example 151 acts as both the nimbus and the UI server; the slaves do not need it. At this point the cluster configuration for the slave nodes is essentially complete.
On the master node you still need to run the commands below, and the second one must be re-run after every configuration change:

mkdir ~/.jstorm
cp -f $JSTORM_HOME/conf/storm.yaml ~/.jstorm
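
The two commands can be combined into one re-runnable line, since mkdir -p is a no-op when ~/.jstorm already exists. The sketch below fakes JSTORM_HOME and ~/.jstorm with demo paths so it is self-contained; on a real master the variable is already exported:

```shell
# Re-runnable form of the two commands above (sketch with demo paths).
# On a real master: mkdir -p ~/.jstorm && cp -f $JSTORM_HOME/conf/storm.yaml ~/.jstorm
JSTORM_HOME=./jstorm-home.demo
mkdir -p "$JSTORM_HOME/conf"
echo 'nimbus.host: "192.168.72.151"' > "$JSTORM_HOME/conf/storm.yaml"

DOT_JSTORM=./dot-jstorm.demo        # stands in for ~/.jstorm
mkdir -p "$DOT_JSTORM" && cp -f "$JSTORM_HOME/conf/storm.yaml" "$DOT_JSTORM"/
```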
II. Starting the JStorm UI
Copy jstorm-ui-2.2.1.war from the jstorm directory into your Tomcat webapps directory, then run:

mkdir ~/.jstorm
cp -f $JSTORM_HOME/conf/storm.yaml ~/.jstorm

Start Tomcat. If you see the following page, the UI is configured successfully:
(screenshot: UI configured successfully)
If you see the page below, the JStorm cluster is configured successfully:
(screenshot: JStorm cluster configured successfully)
