CDH 6.2.0 / 6.3.0 Installation Walkthrough, with Links to the Official Documentation

Download CDH 6.2.0  |  Detailed CDH 6.2.x Installation Guide  |   Cloudera Manager 6.2.0  |   CDH 6.2.x Download
Download CDH 6.3.0  |  Detailed CDH 6.3.x Installation Guide  |   Cloudera Manager 6.3.0  |   CDH 6.3.x Download

Table of Contents


Common JDKs include the Oracle JDK and OpenJDK; frequently used OpenJDK builds include the Linux yum OpenJDK packages, the Zulu JDK, and the GraalVM CE JDK. When installing the JDK for a CDH environment, it is still advisable to use oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm from the official download list. If your company requires OpenJDK, install the bundled Oracle JDK first, complete the CDH installation and the configuration of the component services you need, and only then upgrade to the OpenJDK your company requires; this order is strongly recommended.
Supported JDKs
For upgrading the JDK under CDH, see my other post: CDH-5.16: Upgrading the JDK to OpenJDK 1.8
cdh-java-home

The versions of the components in the package are listed in CDH 6.2.0 Packaging, or can be browsed in the mirror repository under noarch | x86_64. Note: starting with CDH 6.0, Hadoop has been upgraded to version 3.0.



1. Environment Preparation

CDH's environment requirements are listed here: Cloudera Enterprise 6 Requirements and Supported Versions

1.1 Cleaning Up or Reinstalling the Environment

Suppose the old environment already has CDH, HDP, or something else installed and it needs to be removed. Cleanup is somewhat tedious, and deletions should be made with care, but overall it can be done quickly as follows:

  • 1 Switch to the root user.
  • 2 Uninstall the services installed via rpm:
    rpm -qa 
    # or query specific packages
    rpm -qa 'cloudera-manager-*'
    
    # remove a package
    rpm -e <name found by the query above> --nodeps
    
    # clean the yum cache
    sudo yum clean all
    
  • 3 Inspect processes: ps -aux > ps.txt. Use the first column (USER) and the last column (COMMAND) to decide whether a process belongs to the stack being removed; if so, kill it by the PID in the second column: kill -9 <pid>
  • 4 List system users: cat /etc/passwd
  • 5 Delete leftover users: userdel <username>
  • 6 Find and delete the files belonging to such a user:
     find / -name '<username>*'
     # delete the files found
     rm -rf <file>
    
  • 7 Sometimes a file is still in use during deletion; use lsof to find the process holding it, stop the process, then delete.
  • 8 Even when no process appears to hold a file, it may be mounted; unmount it first and then delete: umount cm-5.16.1/run/cloudera-scm-agent/process
    then: rm -rf cm-5.16.1/. If the file still cannot be deleted after unmounting, run the umount command a few more times and retry the deletion.

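The rpm part of the cleanup steps above can be scripted. A minimal sketch, assuming everything to remove is an rpm package whose name starts with cloudera-manager- (the helper names filter_cm_packages and remove_cm_packages are made up for illustration; review the package list before removing anything):

```shell
#!/bin/sh
# Pure helper: filter a package list (one name per line) down to the
# Cloudera Manager packages, so the removal loop only sees those.
filter_cm_packages() {
    grep '^cloudera-manager-' || true
}

# Remove every matching installed package (run as root; destructive!).
remove_cm_packages() {
    rpm -qa | filter_cm_packages | while read -r pkg; do
        echo "removing $pkg"
        rpm -e "$pkg" --nodeps
    done
    yum clean all
}
```

Running `rpm -qa | filter_cm_packages` first, without the removal, is a safe way to preview what would be deleted.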
1.2 Installing the Apache HTTP Service

Some servers have strict restrictions on outbound internet access, so it helps to set up an HTTP service and upload the downloaded resources to it for the later installation steps.

Step 1: Check the status of the Apache httpd service

If a status is reported, you only need to edit the configuration file (see Step 3) and restart the service. If the status query fails, install the Apache HTTP service first (continue with Step 2).

sudo systemctl status httpd

Step 2: Install the Apache httpd service

 yum -y install httpd

Step 3: Edit the Apache httpd configuration

Set the options below, then save and exit. The configured document root is /var/www/html. The other options can be left at their defaults or adjusted as needed.

vim /etc/httpd/conf/httpd.conf
 
 # around line 119
DocumentRoot "/var/www/html"

# around line 131, inside the <Directory "/var/www/html"> </Directory> block: index listing options
# http://httpd.apache.org/docs/2.4/en/mod/mod_autoindex.html#indexoptions
# show up to 100 characters per name, UTF-8 charset, fancy directory listing, folders sorted first
IndexOptions NameWidth=100 Charset=UTF-8 FancyIndexing FoldersFirst

After editing, remember to restart the service.

Step 4: Create the resource directory

sudo mkdir -p /var/www/html/cloudera-repos

1.3 Host Configuration

Add the IPs and hostnames of the cluster to /etc/hosts on every machine.
Note: the hostname must be an FQDN (fully qualified domain name) such as myhost-1.example.com; otherwise, a validation step fails later when the Agents are started from the web pages.

# cdh1
sudo hostnamectl set-hostname cdh1.example.com
# cdh2
sudo hostnamectl set-hostname cdh2.example.com
# cdh3
sudo hostnamectl set-hostname cdh3.example.com

# configure /etc/hosts
192.168.33.3 cdh1.example.com cdh1
192.168.33.6 cdh2.example.com cdh2
192.168.33.9 cdh3.example.com cdh3


# configure /etc/sysconfig/network
# cdh1
HOSTNAME=cdh1.example.com
# cdh2
HOSTNAME=cdh2.example.com
# cdh3
HOSTNAME=cdh3.example.com

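Since the Agent validation later depends on proper FQDNs, it is worth sanity-checking the configured names before moving on. A small sketch; the helper name is_fqdn is made up for illustration, and the regular expression is only a rough approximation of valid hostnames:

```shell
#!/bin/sh
# Rough FQDN check: at least two dot-separated labels, each starting and
# ending with an alphanumeric character (so cdh1.example.com passes,
# a bare "cdh1" does not).
is_fqdn() {
    echo "$1" | grep -Eq '^([A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?\.)+[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$'
}

# Example: verify the configured hostname of the current machine.
# is_fqdn "$(hostname)" && echo "hostname is an FQDN" || echo "set an FQDN first"
```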
1.4 NTP

NTP is a crucial service in a cluster: it keeps the clocks of all nodes in step. If the internal network already has a time synchronization service, each node only needs an NTP client configuration that syncs against it. If there is no such service, we need to configure NTP ourselves.

The plan is as follows. When an external time service is reachable, the nodes can synchronize directly with, for example, the Asian NTP pool. When it is not reachable, cdh1.example.com can be configured as the NTP server, and the other nodes in the cluster synchronize their time against it.

host role
asia.pool.ntp.org Asian NTP pool address
cdh1.example.com ntpd server, using local time as the reference
cdh2.example.com ntpd client; synchronizes time with the ntpd server
cdh3.example.com ntpd client; synchronizes time with the ntpd server

step1 ntpd service

# check the NTP service; install it first if it is missing
systemctl status ntpd.service

step2 Sync the hardware clock with the system clock

Very important: the hardware clock must be kept in sync with the system clock. Edit the configuration file (vim /etc/sysconfig/ntpd) and append SYNC_HWCLOCK=yes at the end.

# Command line options for ntpd
#OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
OPTIONS="-g"
SYNC_HWCLOCK=yes

step3 Add the NTP server list

Edit vim /etc/ntp/step-tickers

# List of NTP servers used by the ntpdate service.

#0.centos.pool.ntp.org
cdh1.example.com

step4 Server-side ntp.conf

Edit the ntp configuration file: vim /etc/ntp.conf

driftfile /var/lib/ntp/drift
logfile /var/log/ntp.log
pidfile   /var/run/ntpd.pid
leapfile  /etc/ntp.leapseconds
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
#allow clients on any IP to sync time, but forbid them from modifying server parameters; default behaves like 0.0.0.0
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
#restrict 10.135.3.58 nomodify notrap nopeer noquery
#allow all access over the local loopback interface
restrict 127.0.0.1
restrict  -6 ::1
# allow other machines on the internal network to sync time: network address and netmask.
# Note that some clusters use unusual gateways; this information can be obtained with:
# /etc/sysconfig/network-scripts/ifcfg-<nic>; route -n; ip route show
restrict 192.168.33.2 mask 255.255.255.0 nomodify notrap
# allow an upstream time server to adjust this machine's clock
#server asia.pool.ntp.org minpoll 4 maxpoll 4 prefer
# when no external time server is reachable, serve local time
server  127.127.1.0     # local clock
fudge   127.127.1.0 stratum 10

step5 Client-side ntp.conf

driftfile /var/lib/ntp/drift
logfile /var/log/ntp.log
pidfile   /var/run/ntpd.pid
leapfile  /etc/ntp.leapseconds
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
server 192.168.33.3 iburst

step6 Restart NTP and synchronize

#restart the service
systemctl restart ntpd.service
#enable at boot
chkconfig ntpd on

ntpq -p
#ntpd -q -g 
#ss -tunlp | grep -w :123
#trigger a manual sync
#ntpdate -uv cdh1.example.com
ntpdate -u  cdh1.example.com

# Check the synchronization status; after a while, ntpstat should report "synchronised".
ntpstat
timedatectl
ntptime

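To confirm programmatically which peer ntpd has actually selected, the output of ntpq -p can be parsed: the chosen peer is the line marked with a leading *. A small sketch (the helper name selected_peer is made up; the sample format follows standard ntpq output):

```shell
#!/bin/sh
# Extract the currently selected peer (the line ntpq marks with '*')
# from `ntpq -p` output passed on stdin.
selected_peer() {
    awk '$1 ~ /^\*/ { sub(/^\*/, "", $1); print $1 }'
}

# Usage on a live system:
# ntpq -p | selected_peer
```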
step7 Check the NTP service status

If the output looks like the following, synchronization is healthy (the status shows PLL,NANO):

[root@cdh2 ~]# ntptime
ntp_gettime() returns code 0 (OK)
  time e0b2b842.b180f51c  Fri, Apr 19 2019  11:09:20.333, (.693374110),
  maximum error 27426 us, estimated error 0 us, TAI offset 0
ntp_adjtime() returns code 0 (OK)
  modes 0x0 (),
  offset 0.000 us, frequency 3.932 ppm, interval 1 s,
  maximum error 27426 us, estimated error 0 us,
  status 0x2001 (PLL,NANO),
  time constant 6, precision 0.001 us, tolerance 500 ppm,

Alternatively, use the timedatectl command (NTP synchronized: yes means synchronization succeeded):

[root@cdh2 ~]#  timedatectl
      Local time: Fri 2019-04-19 11:09:20 CST
  Universal time: Fri 2019-04-19 11:09:20 UTC
        RTC time: Fri 2019-04-19 11:09:20
       Time zone: Asia/Shanghai (CST, +0800)
     NTP enabled: no
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a

1.5 MySQL

Download MySQL

step1 Configure environment variables

# add MySQL to the PATH
export PATH=$PATH:/usr/local/mysql/bin

step2 Create the user and group

#① create a mysql group
groupadd mysql
#② create the mysql user and add it to the mysql group
useradd -r -g mysql mysql
#③ optionally give the mysql user a password (mysql); follow the prompts to set it
passwd mysql 
#④ change the owner and group of /usr/local/mysql
chown -R mysql:mysql /usr/local/mysql/

step3 Write the MySQL configuration file

Edit the /etc/my.cnf file (vim /etc/my.cnf) and set it as follows:

[mysqld]
basedir = /usr/local/mysql
datadir = /usr/local/mysql/data
port = 3306
socket=/var/lib/mysql/mysql.sock
character-set-server=utf8
 
transaction-isolation = READ-COMMITTED
# Disabling symbolic-links is recommended to prevent assorted security risks;
# to do so, uncomment this line:
symbolic-links = 0

server_id=1
max-binlog-size = 500M
log_bin=/var/lib/mysql/mysql_binary_log
#binlog_format = mixed
binlog_format = Row
expire-logs-days = 14

max_connections = 550
read_buffer_size = 2M
read_rnd_buffer_size = 16M
sort_buffer_size = 8M
join_buffer_size = 8M

# InnoDB settings
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit  = 2
innodb_log_buffer_size = 64M
innodb_buffer_pool_size = 4G
innodb_thread_concurrency = 8
innodb_flush_method = O_DIRECT
innodb_log_file_size = 512M
  
[client]
default-character-set=utf8
socket=/var/lib/mysql/mysql.sock
  
[mysql]
default-character-set=utf8
socket=/var/lib/mysql/mysql.sock
 
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
 
sql_mode=STRICT_ALL_TABLES

step4 Extract and set up

# extract to /usr/local/
tar -zxf mysql-5.7.27-el7-x86_64.tar.gz -C /usr/local/
# rename
mv /usr/local/mysql-5.7.27-el7-x86_64/ /usr/local/mysql
 
# register mysql as an init service so it starts automatically at boot
cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysql
vim /etc/init.d/mysql
# set the following variables
basedir=/usr/local/mysql
datadir=/usr/local/mysql/data
 
#create the directory for the socket file
mkdir -p /var/lib/mysql
chown mysql:mysql /var/lib/mysql
#register the mysql service
chkconfig --add mysql 
# start the mysql service automatically
chkconfig mysql on 

step5 Run the installation

#initialize mysql; be sure to note the temporary password it prints, e.g.: ?w=HuL-yV05q
/usr/local/mysql/bin/mysqld --initialize --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data
#set up the SSL/RSA files for the database
/usr/local/mysql/bin/mysql_ssl_rsa_setup --datadir=/usr/local/mysql/data
 
# start the mysql service; once the output stops scrolling, press Ctrl + C to return to the shell (the server keeps running in the background)
/usr/local/mysql/bin/mysqld_safe --user=mysql & 
# restart the MySQL service
/etc/init.d/mysql restart 
#check the mysql processes
ps -ef|grep mysql 

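The temporary password printed by --initialize can also be recovered from the server's output or error log instead of scrolling back. A sketch, assuming the standard "A temporary password is generated for root@localhost: ..." log line format (the helper name temp_password is made up):

```shell
#!/bin/sh
# Pull the temporary root password out of MySQL's initialization output or
# error log: it is the last field of the "A temporary password is generated" line.
temp_password() {
    awk '/A temporary password is generated/ { print $NF }'
}

# Usage, assuming the log location from the [mysqld_safe] section above:
# temp_password < /var/log/mysqld.log
```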
step6 Log in to MySQL and finish the setup

#log in to mysql for the first time, using the temporary password from above
/usr/local/mysql/bin/mysql -uroot -p

Enter the temporary password generated in the previous step to reach the MySQL prompt. For the new password, a random password generator site can produce one of high strength; production environments usually have strength requirements.

--the password must be changed first
mysql> set password=password('V&0XkVpHZwkCEdY$');

--add a user with remote access in mysql
mysql> use mysql; 
mysql> select host,user from user; 
-- add a remote-access user scm and set its password
mysql> grant all privileges on *.* to 'scm'@'%' identified by '*YPGT$%GqA' with grant option; 
--reload the privilege tables
mysql> flush privileges;

1.6 Remaining Setup

This part of the installation should already be familiar, so feel free to complete it on your own first. The steps below use CentOS 7.4 as the example for installing CDH 6.2.0.

Note: for the MySQL configuration file /etc/my.cnf, please follow Configuring and Starting the MySQL Server.

1.7 Other

For more detail, read the official CDH documentation:


2. Downloading the Resources

If the servers cannot download directly, use one of the two approaches in 2.1 and 2.2 to download the resources below locally, then upload them to the Apache HTTP server directory /var/www/html/cloudera-repos.

Two approaches are described here. The first is a basic-package version that downloads only the essential packages; the second is a full version that effectively mirrors the official repository locally. Pick one of 2.1 or 2.2. The first is recommended: download just the basic packages for a quick deployment, and later, when upgrading parcels or CDH components, download the corresponding packages and upgrade then.

2.1 Basic-Package Download

Upload the downloaded resources to the Apache HTTP server node set up earlier; create the directories manually if they do not exist, and make sure the paths have sufficient permissions:

# mind the file permissions
chmod 555 -R /var/www/html/cloudera-repos

2.1.1 Download the parcel packages

 wget -b https://archive.cloudera.com/cdh6/6.2.0/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373-el7.parcel			
 wget https://archive.cloudera.com/cdh6/6.2.0/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373-el7.parcel.sha1		
 wget https://archive.cloudera.com/cdh6/6.2.0/parcels/manifest.json

Upload the downloaded packages to /var/www/html/cloudera-repos/cdh6/6.2.0/parcels

For the CDH 6.3.0 parcels, visit: https://archive.cloudera.com/cdh6/6.3.0/parcels/

2.1.2 Download the required rpm packages

 wget https://archive.cloudera.com/cm6/6.2.0/redhat7/yum/RPMS/x86_64/cloudera-manager-agent-6.2.0-968826.el7.x86_64.rpm
 wget -b https://archive.cloudera.com/cm6/6.2.0/redhat7/yum/RPMS/x86_64/cloudera-manager-daemons-6.2.0-968826.el7.x86_64.rpm
 wget https://archive.cloudera.com/cm6/6.2.0/redhat7/yum/RPMS/x86_64/cloudera-manager-server-6.2.0-968826.el7.x86_64.rpm
 wget https://archive.cloudera.com/cm6/6.2.0/redhat7/yum/RPMS/x86_64/cloudera-manager-server-db-2-6.2.0-968826.el7.x86_64.rpm
 wget https://archive.cloudera.com/cm6/6.2.0/redhat7/yum/RPMS/x86_64/enterprise-debuginfo-6.2.0-968826.el7.x86_64.rpm
 wget https://archive.cloudera.com/cm6/6.2.0/redhat7/yum/RPMS/x86_64/oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm

Upload the downloaded packages to /var/www/html/cloudera-repos/cm6/6.2.0/redhat7/yum/RPMS/x86_64

For other versions, visit the Cloudera Manager archive page; for example, for CDH 6.3.0: https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/

2.1.3 Fetch the other cloudera-manager resources

2.1.3.1 Fetch cloudera-manager.repo

Upload the files downloaded below to /var/www/html/cloudera-repos/cm6/6.2.0/redhat7/yum

wget https://archive.cloudera.com/cm6/6.2.0/redhat7/yum/RPM-GPG-KEY-cloudera
wget https://archive.cloudera.com/cm6/6.2.0/redhat7/yum/cloudera-manager.repo

2.1.3.2 Fetch allkeys.asc

Upload the file downloaded below to /var/www/html/cloudera-repos/cm6/6.2.0

wget https://archive.cloudera.com/cm6/6.2.0/allkeys.asc
mv allkeys.asc /var/www/html/cloudera-repos/cm6/6.2.0

2.1.3.3 Initialize repodata

On the Apache HTTP server, change into /var/www/html/cloudera-repos/cm6/6.2.0/redhat7/yum/ and run:

#yum repolist
# if createrepo is missing, install it with yum
yum -y install createrepo
cd /var/www/html/cloudera-repos/cm6/6.2.0/redhat7/yum/
# create the repodata
createrepo .

2.1.4 Download the database driver

MySQL is used here as the metadata database, so the MySQL JDBC driver is needed; if you chose a different database, read Install and Configure Databases carefully.

Extract the downloaded archive to obtain mysql-connector-java-5.1.46-bin.jar, and be sure to rename it to mysql-connector-java.jar.

wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.46.tar.gz
# extract
tar zxvf mysql-connector-java-5.1.46.tar.gz
# rename mysql-connector-java-5.1.46-bin.jar to mysql-connector-java.jar and place it under /usr/share/java/
mv mysql-connector-java-5.1.46-bin.jar /usr/share/java/mysql-connector-java.jar
# also copy it to the other nodes
scp /usr/share/java/mysql-connector-java.jar root@cdh2:/usr/share/java/
scp /usr/share/java/mysql-connector-java.jar root@cdh3:/usr/share/java/

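Copying the driver to every node can be wrapped in a loop. A sketch with a dry-run mode, assuming the cdh2/cdh3 host names from the /etc/hosts configuration above (the helper name push_jar and the DRY_RUN switch are made up for illustration):

```shell
#!/bin/sh
# Push the renamed driver to each node given as an argument.
# DRY_RUN=1 prints the scp commands instead of executing them.
JAR=/usr/share/java/mysql-connector-java.jar

push_jar() {
    for host in "$@"; do
        cmd="scp $JAR root@$host:/usr/share/java/"
        if [ "${DRY_RUN:-0}" = 1 ]; then
            echo "$cmd"
        else
            $cmd
        fi
    done
}

# DRY_RUN=1 push_jar cdh2 cdh3   # preview
# push_jar cdh2 cdh3             # actually copy
```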
*2.2 Full Mirror Download

2.2.1 Download the parcel files

 cd /var/www/html/cloudera-repos
 sudo wget --recursive --no-parent --no-host-directories https://archive.cloudera.com/cdh6/6.2.0/parcels/ -P /var/www/html/cloudera-repos
 sudo wget --recursive --no-parent --no-host-directories https://archive.cloudera.com/gplextras6/6.2.0/parcels/ -P /var/www/html/cloudera-repos
 sudo chmod -R ugo+rX /var/www/html/cloudera-repos/cdh6
 sudo chmod -R ugo+rX /var/www/html/cloudera-repos/gplextras6

2.2.2 Download Cloudera Manager

sudo wget --recursive --no-parent --no-host-directories https://archive.cloudera.com/cm6/6.2.0/redhat7/ -P /var/www/html/cloudera-repos
sudo wget https://archive.cloudera.com/cm6/6.2.0/allkeys.asc -P /var/www/html/cloudera-repos/cm6/6.2.0/
sudo chmod -R ugo+rX /var/www/html/cloudera-repos/cm6

2.2.3 Download the database driver

Same as 2.1.4 Download the database driver.

2.3 Configure the cloudera-manager yum Repository on the Installation Nodes

Assume the required resources have been downloaded as above and uploaded to an HTTP service the servers can reach.

2.3.1 Download

In the links below, replace ${cloudera-repos.http.host} with the IP of your own Apache HTTP service.

wget http://${cloudera-repos.http.host}/cloudera-repos/cm6/6.2.0/redhat7/yum/cloudera-manager.repo -P /etc/yum.repos.d/
# import the repository signing GPG key:
sudo rpm --import http://${cloudera-repos.http.host}/cloudera-repos/cm6/6.2.0/redhat7/yum/RPM-GPG-KEY-cloudera

2.3.2 Edit

Edit cloudera-manager.repo (vim /etc/yum.repos.d/cloudera-manager.repo) to look like the following (note: the original https URLs must be changed to http):

[cloudera-manager]
name=Cloudera Manager 6.2.0
baseurl=http://${cloudera-repos.http.host}/cloudera-repos/cm6/6.2.0/redhat7/yum/
gpgkey=http://${cloudera-repos.http.host}/cloudera-repos/cm6/6.2.0/redhat7/yum/RPM-GPG-KEY-cloudera
gpgcheck=1
enabled=1
autorefresh=0
type=rpm-md

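The https-to-http change can be done with sed instead of editing by hand. A sketch that rewrites only the URL lines (the helper name repo_to_http is made up for illustration):

```shell
#!/bin/sh
# Rewrite https:// to http:// on the baseurl and gpgkey lines of a
# cloudera-manager.repo read from stdin (the local mirror is plain http).
repo_to_http() {
    sed -E 's#^(baseurl|gpgkey)=https://#\1=http://#'
}

# In place, once verified:
# sed -E -i 's#^(baseurl|gpgkey)=https://#\1=http://#' /etc/yum.repos.d/cloudera-manager.repo
```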
2.3.3 Update yum

#clean the yum cache
sudo yum clean all
#update yum
sudo yum update

3. Installation

With the preparation above done, we now move to the actual installation.

3.1 Install Cloudera Manager

  • On the Server node, run:

    sudo yum install -y cloudera-manager-daemons cloudera-manager-agent cloudera-manager-server
    
  • On the Agent nodes, run:

    sudo yum install -y cloudera-manager-agent cloudera-manager-daemons
    
  • After installation, the following files and directories are created automatically on the server node:

     /etc/cloudera-scm-agent/config.ini
     /etc/cloudera-scm-server/
     /opt/cloudera
    ……
    
  • To make the later installation faster, place the downloaded CDH parcel here (Server node only):

    cd /opt/cloudera/parcel-repo/
    wget http://${cloudera-repos.http.host}/cloudera-repos/cdh6/6.2.0/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373-el7.parcel
    wget http://${cloudera-repos.http.host}/cloudera-repos/cdh6/6.2.0/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373-el7.parcel.sha1
    wget http://${cloudera-repos.http.host}/cloudera-repos/cdh6/6.2.0/parcels/manifest.json
    # in manifest.json, find the hash for this parcel version (around line 755) and copy it into the *.sha file
    # usually the content of CDH-6.2.0-1.cdh6.2.0.p0.967373-el7.parcel.sha1 already equals the parcel hash, so renaming the file is enough
    echo "e9c8328d8c370517c958111a3db1a085ebace237"  > CDH-6.2.0-1.cdh6.2.0.p0.967373-el7.parcel.sha
    #echo "d6e1483e47e3f2b1717db8357409865875dc307e"  > CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.sha
    #fix the owner and group
    chown cloudera-scm:cloudera-scm /opt/cloudera/parcel-repo/*
    
    
  • Point the Cloudera Manager Agents at the Cloudera Manager Server by editing the agent configuration.
    This means configuring the config.ini file on each Agent node.

    vim /etc/cloudera-scm-agent/config.ini
    #set the following options
    # Hostname of the CM server, i.e. the host running the Cloudera Manager Server
    server_host=cdh1.example.com
    # Port that the CM server is listening on
    server_port=7182
    #1 enables TLS encryption for the agent; if TLS was not set up earlier, do NOT enable this
    #use_tls=1
    

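Before renaming the .sha1 file, it is worth verifying that the downloaded parcel actually matches the published hash; sha1sum does this. A sketch (the helper name check_sha1 is made up; the filenames follow the commands above):

```shell
#!/bin/sh
# Compare a file's SHA-1 against an expected hex digest; prints OK or
# MISMATCH and returns a matching exit status.
check_sha1() {
    file=$1 expected=$2
    actual=$(sha1sum "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "OK $file"
    else
        echo "MISMATCH $file: got $actual, expected $expected"
        return 1
    fi
}

# check_sha1 CDH-6.2.0-1.cdh6.2.0.p0.967373-el7.parcel \
#     "$(cat CDH-6.2.0-1.cdh6.2.0.p0.967373-el7.parcel.sha1)"
```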
3.2 Set Up the Cloudera Manager Database

Cloudera Manager Server ships with a database preparation script, which is used to initialize the database-related configuration; it does not create the tables in the metadata database.

3.2.1 Create the databases for the Cloudera software

This step creates the databases the Cloudera software needs; otherwise, running the script in the next step fails with an error like:

[                          main] DbCommandExecutor              INFO  Able to connect to db server on host 'localhost' but not able to find or connect to database 'scm'.
[                          main] DbCommandExecutor              ERROR Error when connecting to database.
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown database 'scm'
……

The databases required by the Cloudera software are listed in:
Databases for Cloudera Software
If you are installing only Cloudera Manager Server for now, just create the scm database as shown above; if other services will be installed, create their databases as well while you are at it.

# run the following after logging in to MySQL
CREATE DATABASE scm DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;

# create the other databases as well
CREATE DATABASE amon DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE rman DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE hue DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
# metadata databases for Hive, Impala, etc.
CREATE DATABASE metastore DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE sentry DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE nav DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE navms DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE oozie DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;

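The repetitive CREATE DATABASE statements can be generated from a list of names. A sketch that prints the SQL for piping into the mysql client (the helper name create_db_sql is made up for illustration):

```shell
#!/bin/sh
# Emit a utf8 CREATE DATABASE statement for each name given, matching the
# statements above; pipe the output into `mysql -uroot -p`.
create_db_sql() {
    for db in "$@"; do
        echo "CREATE DATABASE $db DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;"
    done
}

# create_db_sql scm amon rman hue metastore sentry nav navms oozie | mysql -uroot -p
```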
3.2.2 Initialize the database

The initialization uses the scm_prepare_database.sh script. Its syntax is:

# the available options can be listed with: scm_prepare_database.sh --help
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh [options] <databaseType> <databaseName> <databaseUser> <password>

Initialize the scm database configuration. This step updates /etc/cloudera-scm-server/db.properties (if the driver cannot be found, check that /usr/share/java contains mysql-connector-java.jar).

[root@cdh1 ~]#  sudo /opt/cloudera/cm/schema/scm_prepare_database.sh -h localhost  mysql scm scm scm
JAVA_HOME=/usr/java/jdk1.8.0_181-cloudera
Verifying that we can write to /etc/cloudera-scm-server
Creating SCM configuration file in /etc/cloudera-scm-server
Executing:  /usr/local/zulu8/bin/java -cp /usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/java/postgresql-connector-java.jar:/opt/cloudera/cm/schema/../lib/* com.cloudera.enterprise.dbutil.DbCommandExecutor /etc/cloudera-scm-server/db.properties com.cloudera.cmf.db.
[                          main] DbCommandExecutor              INFO  Successfully connected to database.
All done, your SCM database is configured correctly!

Parameter reference:

  • options: extra flags; if the database is not local, use -h or --host to specify the MySQL host (default: localhost)
  • databaseType: mysql here; other database types such as oracle are also supported
  • databaseName: the database to configure; the scm database here
  • databaseUser: the MySQL username; scm here
  • password: the password for that MySQL user; scm here

If we are using a self-configured JDK, this step may fail with an error like the following:

[root@cdh1 java]# sudo /opt/cloudera/cm/schema/scm_prepare_database.sh -h cdh1 mysql scm scm scm
JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64
Verifying that we can write to /etc/cloudera-scm-server
Creating SCM configuration file in /etc/cloudera-scm-server
Executing:  /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/bin/java -cp /usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/java/postgresql-connector-java.jar:/opt/cloudera/cm/schema/../lib/* com.cloudera.enterprise.dbutil.DbCommandExecutor /etc/cloudera-scm-server/db.properties com.cloudera.cmf.db.
[                          main] DbCommandExecutor              ERROR Error when connecting to database.
java.sql.SQLException: java.lang.Error: java.io.FileNotFoundException: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/jre/lib/tzdb.dat (No such file or directory)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:964)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:897)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:886)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:860)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:877)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:873)
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:443)
        at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:389)
        at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:330)
        at java.sql.DriverManager.getConnection(DriverManager.java:664)
        at java.sql.DriverManager.getConnection(DriverManager.java:247)
        at com.cloudera.enterprise.dbutil.DbCommandExecutor.testDbConnection(DbCommandExecutor.java:263)
        at com.cloudera.enterprise.dbutil.DbCommandExecutor.main(DbCommandExecutor.java:139)
Caused by: java.lang.Error: java.io.FileNotFoundException: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/jre/lib/tzdb.dat (No such file or directory)
        at sun.util.calendar.ZoneInfoFile$1.run(ZoneInfoFile.java:261)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.util.calendar.ZoneInfoFile.<clinit>(ZoneInfoFile.java:251)
        at sun.util.calendar.ZoneInfo.getTimeZone(ZoneInfo.java:589)
        at java.util.TimeZone.getTimeZone(TimeZone.java:560)
        at java.util.TimeZone.setDefaultZone(TimeZone.java:666)
        at java.util.TimeZone.getDefaultRef(TimeZone.java:636)
        at java.util.GregorianCalendar.<init>(GregorianCalendar.java:591)
        at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:706)
        at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
        ... 6 more
Caused by: java.io.FileNotFoundException: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/jre/lib/tzdb.dat (No such file or directory)
        at java.io.FileInputStream.open0(Native Method)
        at java.io.FileInputStream.open(FileInputStream.java:195)
        at java.io.FileInputStream.<init>(FileInputStream.java:138)
        at sun.util.calendar.ZoneInfoFile$1.run(ZoneInfoFile.java:255)
        ... 20 more
[                          main] DbCommandExecutor              ERROR Exiting with exit code 4
--> Error 4, giving up (use --force if you wish to ignore the error)

Fix: open the script /opt/cloudera/cm/schema/scm_prepare_database.sh and, around line 108 in the local JAVA8_HOME_CANDIDATES=() list, add your own JAVA_HOME:

  local JAVA8_HOME_CANDIDATES=(
  	'/usr/java/jdk1.8.0_181-cloudera'
    '/usr/java/jdk1.8'
    '/usr/java/jre1.8'
    '/usr/lib/jvm/j2sdk1.8-oracle'
    '/usr/lib/jvm/j2sdk1.8-oracle/jre'
    '/usr/lib/jvm/java-8-oracle'
  )

3.3 Install CDH and Other Software

Only the server needs to be started, on the Cloudera Manager Server node; the Agents are started automatically in the later steps of the web UI.

3.3.1 Start the Cloudera Manager Server

sudo systemctl start cloudera-scm-server

Check the startup result:

sudo systemctl status cloudera-scm-server

To watch the startup process, run the following on the Cloudera Manager Server host:

sudo tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log
# the Cloudera Manager Admin Console is ready once you see this log entry:
# INFO WebServerImpl:com.cloudera.server.cmf.WebServerImpl: Started Jetty server.

If the log shows problems, resolve them as indicated. For example:

2019-06-13 16:33:19,148 ERROR WebServerImpl:com.cloudera.server.web.cmf.search.components.SearchRepositoryManager: No read permission to the server storage directory [/var/lib/cloudera-scm-server/search]
2019-06-13 16:33:19,148 ERROR WebServerImpl:com.cloudera.server.web.cmf.search.components.SearchRepositoryManager: No write permission to the server storage directory [/var/lib/cloudera-scm-server/search]
……
2019-06-13 16:33:19,637 ERROR WebServerImpl:org.springframework.web.servlet.DispatcherServlet: Context initialization failed
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'reportsController': Unsatisfied dependency expressed through field 'viewFactory'; nested exception is org.springframework.beans.factory.BeanCreationNotAllowedException: Error creating bean with name 'viewFactory': Singleton bean creation not allowed while singletons of this factory are in destruction (Do not request a bean from a BeanFactory in a destroy method implementation!)
……
Caused by: org.springframework.beans.factory.BeanCreationNotAllowedException: Error creating bean with name 'viewFactory': Singleton bean creation not allowed while singletons of this factory are in destruction (Do not request a bean from a BeanFactory in a destroy method implementation!)
……
================================================================================
Starting SCM Server. JVM Args: [-Dlog4j.configuration=file:/etc/cloudera-scm-server/log4j.properties, -Dfile.encoding=UTF-8, -Duser.timezone=Asia/Shanghai, -Dcmf.root.logger=INFO,LOGFILE, -Dcmf.log.dir=/var/log/cloudera-scm-server, -Dcmf.log.file=cloudera-scm-server.log, -Dcmf.jetty.threshhold=WARN, -Dcmf.schema.dir=/opt/cloudera/cm/schema, -Djava.awt.headless=true, -Djava.net.preferIPv4Stack=true, -Dpython.home=/opt/cloudera/cm/python, -XX:+UseConcMarkSweepGC, -XX:+UseParNewGC, -XX:+HeapDumpOnOutOfMemoryError, -Xmx2G, -XX:MaxPermSize=256m, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/tmp, -XX:OnOutOfMemoryError=kill -9 %p], Args: [], Version: 6.2.0 (#968826 built by jenkins on 20190314-1704 git: 16bbe6211555460a860cf22d811680b35755ea81)
Server failed.
java.lang.NoClassDefFoundError: Could not initialize class sun.util.calendar.ZoneInfoFile
	at sun.util.calendar.ZoneInfo.getTimeZone(ZoneInfo.java:589)
	at java.util.TimeZone.getTimeZone(TimeZone.java:560)
	at java.util.TimeZone.setDefaultZone(TimeZone.java:666)
	at java.util.TimeZone.getDefaultRef(TimeZone.java:636)
	at java.util.Date.normalize(Date.java:1197)
	at java.util.Date.toString(Date.java:1030)
	at java.lang.String.valueOf(String.java:2994)
	at java.lang.StringBuilder.append(StringBuilder.java:131)
	at org.springframework.context.support.AbstractApplicationContext.toString(AbstractApplicationContext.java:1367)
	at java.lang.String.valueOf(String.java:2994)
	at java.lang.StringBuilder.append(StringBuilder.java:131)
	at org.springframework.context.support.AbstractApplicationContext.prepareRefresh(AbstractApplicationContext.java:583)
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:512)
	at org.springframework.context.access.ContextSingletonBeanFactoryLocator.initializeDefinition(ContextSingletonBeanFactoryLocator.java:143)
	at org.springframework.beans.factory.access.SingletonBeanFactoryLocator.useBeanFactory(SingletonBeanFactoryLocator.java:383)
	at com.cloudera.server.cmf.Main.findBeanFactory(Main.java:481)
	at com.cloudera.server.cmf.Main.findBootstrapApplicationContext(Main.java:472)
	at com.cloudera.server.cmf.Main.bootstrapSpringContext(Main.java:375)
	at com.cloudera.server.cmf.Main.<init>(Main.java:260)
	at com.cloudera.server.cmf.Main.main(Main.java:233)
================================================================================

Fix the file permission problems and the time zone. If the problems cannot be resolved, switch to the Oracle JDK.
For the time-zone issue, edit /opt/cloudera/cm/bin/cm-server and, around line 40, add CMF_OPTS="$CMF_OPTS -Duser.timezone=Asia/Shanghai"
cloudera-scm-server-timezone

If the following error appears, delete the guid in /var/lib/cloudera-scm-agent/cm_guid.

[15/Jun/2019 13:54:55 +0000] 24821 MainThread agent        ERROR    Error, CM server guid updated, expected 198b7045-53ce-458a-9c0a-052d0aba8a22, received ea04f769-95c8-471f-8860-3943bfc8ea7b

*(Optional, if needed) To instantiate a new cloudera-scm-server, generate a new uuid and restart:

uuidgen > /etc/cloudera-scm-server/uuid

3.3.2 Open the Web Console

In a web browser, go to http://<server_host>:7180, where <server_host> is the FQDN or IP address of the host running the Cloudera Manager Server.

Log in to the Cloudera Manager Admin Console; the default credentials are:

  • Username: admin
  • Password: admin

After logging in, the pages below appear; just follow the prompts: Welcome -> Accept License -> Select Edition.
cdh-web-01.png
cdh-web-02

This step selects the edition to install, and the main features each edition supports are listed. The first column is Cloudera Express, the free quick-start edition; the second is the Cloudera Enterprise trial (free for 60 days); the third is Cloudera Enterprise itself, with the fullest set of features and services, which requires a license and is paid. See the complete list of features available in Cloudera Express and Cloudera Enterprise.
cdh-web-03
Choosing the first column, the free Express edition, is fine for most needs that are not particularly special or complex. If its features later prove insufficient and you want Cloudera Enterprise, there is no need to worry: in the Cloudera Manager UI, open the Administration menu in the page header, click License in the drop-down list, and on that page choose either "Try Cloudera Enterprise for 60 days" or "Upgrade to Cloudera Enterprise". For more detailed upgrade notes, see Upgrading from Cloudera Express to Cloudera Enterprise ➹

Choosing the second column gives free access to all Cloudera Enterprise features for 60 days; the trial can only be used once. For notes on license expiry and trial licenses, see Managing Licenses ➹

Choosing the third column, Cloudera Enterprise, requires a license: to obtain one, fill out the form or call 866-843-7207. Detailed license notes are in Managing Licenses; features and pricing are on the Features and Pricing page.



Cluster Installation

cdh-web-04.png

  • Welcome
  • Cluster Basics: give the cluster a name.
  • Specify Hosts: enter the cluster hostnames, one per line, for example:
    cdh1.example.com
    cdh2.example.com
    cdh3.example.com
    
    Note in particular that these addresses must follow the FQDN (fully qualified domain name) convention; otherwise a validation step fails during the Agents installation.
    
    What if the root passwords are not the same on all nodes? Besides having the administrator unify them, you can also install on a single node first: when the web wizard reaches the cluster setup step for selecting component services, open another page, go to Cloudera Manager, choose Hosts -> All Hosts -> Add Hosts, and follow the prompts to add the other nodes under this cluster name, entering each machine's root password for validation along the way.

Host validation

  • Select Repository: a custom repository can be configured here (i.e. the http://${cloudera-repos.http.host}/cloudera-repos/cm6/6.2.0 repository set up earlier), etc.
  • JDK Install Options: if the JDK is already installed in the environment, leave this unchecked and continue.
  • Enter Login Credentials: enter the SSH account for the Cloudera Manager hosts, use root, and type the password.
  • Install Agents: this step starts the Agent service on the cluster's Agent nodes. cdh-web-05.png
  • Install Parcels cdh-web-06.png
  • Inspect Cluster: run the Network and Host inspections, then continue. cdh-web-07.png
    If this step reports errors like the following (the screenshots here are from CDH 6.3.0):

⚠️ Handling Warning 1
Cloudera recommends setting /proc/sys/vm/swappiness to a maximum of 10. The current setting is 30. Use the sysctl command to change the setting at runtime, and edit /etc/sysctl.conf so that it survives a reboot.
You may continue with the installation, but Cloudera Manager may report that your hosts are unhealthy because they are swapping. The following hosts are affected:

Fix:

sysctl vm.swappiness=10
# the change above takes effect immediately, but reverts to the old value after a reboot unless persisted
echo 'vm.swappiness=10'>> /etc/sysctl.conf

⚠️ Handling Warning 2
Transparent Huge Page compaction is enabled, which can cause significant performance problems. Run "echo never > /sys/kernel/mm/transparent_hugepage/defrag" and
"echo never > /sys/kernel/mm/transparent_hugepage/enabled" to disable it,
then add the same commands to an init script such as /etc/rc.local so they are applied on reboot. The following hosts are affected:

Fix:

echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled

# then add the commands to an init script
vi /etc/rc.local
# append the following
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled

Set Up the Cluster with the Wizard

  • Select Services: you can start with just the Essentials services and add more later; for a data-warehouse workload, the third option is a good fit. cdh-web-08.png
  • Assign Roles: distribute the selected components across the hosts. cdh-web-09.png
  • Setup Database
Service Host Database Username Password
Hive cdh1.yore.com metastore scm *YPGT$%GqA
Activity Monitor cdh1.yore.com amon scm *YPGT$%GqA
Oozie Server cdh1.yore.com oozie scm *YPGT$%GqA
Hue cdh1.yore.com hue scm *YPGT$%GqA

If you forget a database password, it can often be recalled by trying likely candidates, or simply read from the /etc/cloudera-scm-server/db.properties file.
cdh-web-10.png

  • Review Changes cdh-web-11.png
  • Command Details: if a step fails here because of a database problem, drop the corresponding database and recreate it. cdh-web-12.png
  • Summary cdh-web-13.png

4. Other Issues

4.1 Error starting NodeManager

The following exception occurs:

2019-06-16 12:19:25,932 WARN org.apache.hadoop.service.AbstractService: When stopping the service NodeManager : java.lang.NullPointerException
java.lang.NullPointerException
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStop(NodeManager.java:483)
	at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:222)
	at org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:54)
	at org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:104)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:869)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:942)
2019-06-16 12:19:25,932 ERROR org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager
org.apache.hadoop.service.ServiceStateException: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: /var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/LOCK: Permission denied
	at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:173)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartRecoveryStore(NodeManager.java:281)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:354)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:869)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:942)
Caused by: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: /var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/LOCK: Permission denied
	at org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200)
	at org.fusesource.leveldbjni.internal.NativeDB.open(NativeDB.java:218)
	at org.fusesource.leveldbjni.JniDBFactory.open(JniDBFactory.java:168)
	at org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.openDatabase(NMLeveldbStateStoreService.java:1517)
	at org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.initStorage(NMLeveldbStateStoreService.java:1504)
	at org.apache.hadoop.yarn.server.nodemanager.recovery.NMStateStoreService.serviceInit(NMStateStoreService.java:342)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
	... 5 more

Check /var/lib on each NodeManager node:

cd /var/lib
ls -l | grep -i hadoop

One node shows the following (note the numeric, unresolved owner on hadoop-hdfs and the wrong owners on the other directories):

[root@cdh1 lib]#  ls -l | grep -i hadoop
drwxr-xr-x   3          996          992 4096 Apr 25 14:39 hadoop-hdfs
drwxr-xr-x   2 cloudera-scm cloudera-scm 4096 Apr 25 13:50 hadoop-httpfs
drwxr-xr-x   2 sentry       sentry       4096 Apr 25 13:50 hadoop-kms
drwxr-xr-x   2 flume        flume        4096 Apr 25 13:50 hadoop-mapreduce
drwxr-xr-x   4 solr         solr         4096 Apr 25 14:40 hadoop-yarn

while the other nodes show:

[root@cdh2 lib]#  ls -l | grep -i hadoop
drwxr-xr-x  3 hdfs         hdfs         4096 Jun 16 06:04 hadoop-hdfs
drwxr-xr-x  3 httpfs       httpfs       4096 Jun 16 06:04 hadoop-httpfs
drwxr-xr-x  2 mapred       mapred       4096 Jun 16 05:06 hadoop-mapreduce
drwxr-xr-x  4 yarn         yarn         4096 Jun 16 06:07 hadoop-yarn

So run the following on the problematic node to fix the ownership and permissions of those directories, then restart that node's NodeManager:

chown -R hdfs:hdfs /var/lib/hadoop-hdfs
chown -R httpfs:httpfs /var/lib/hadoop-httpfs
chown -R kms:kms /var/lib/hadoop-kms
chown -R mapred:mapred /var/lib/hadoop-mapreduce
chown -R yarn:yarn /var/lib/hadoop-yarn
chmod -R 755 /var/lib/hadoop-*
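The per-directory fixes above can also be scripted. The sketch below takes the owner mapping from the healthy node's listing and only prints the commands it would run (a dry run), so they can be reviewed first; remove the `echo` wrappers by running the printed lines to actually apply them.

```shell
# Dry run: print the ownership/permission fixes for the Hadoop state
# directories. The owner mapping is taken from a healthy node's listing.
fix_hadoop_owners() {
  while read -r owner dir; do
    echo "chown -R $owner $dir"
    echo "chmod -R 755 $dir"
  done <<'EOF'
hdfs:hdfs /var/lib/hadoop-hdfs
httpfs:httpfs /var/lib/hadoop-httpfs
kms:kms /var/lib/hadoop-kms
mapred:mapred /var/lib/hadoop-mapreduce
yarn:yarn /var/lib/hadoop-yarn
EOF
}

fix_hadoop_owners
```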

4.2 Could not open file in log_dir /var/log/catalogd: Permission denied

The log shows the following exception:

+ exec /opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/impala/../../bin/catalogd --flagfile=/var/run/cloudera-scm-agent/process/173-impala-CATALOGSERVER/impala-conf/catalogserver_flags
Could not open file in log_dir /var/log/catalogd: Permission denied

……

+ exec /opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/impala/../../bin/statestored --flagfile=/var/run/cloudera-scm-agent/process/175-impala-STATESTORE/impala-conf/state_store_flags
Could not open file in log_dir /var/log/statestore: Permission denied

Run the following to fix the ownership and permissions of the affected files on the problematic node, then restart that node's corresponding services:

cd /var/log
ls -l /var/log | grep -i catalogd
# Run on the Impala Catalog Server node
chown -R impala:impala /var/log/catalogd
# Run on the Impala StateStore node
chown -R impala:impala /var/log/statestore

4.3 Cannot connect to port 2049

CONF_DIR=/var/run/cloudera-scm-agent/process/137-hdfs-NFSGATEWAY
CMF_CONF_DIR=
unlimited
Cannot connect to port 2049.
using /opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/bigtop-utils as JSVC_HOME

Start rpcbind on the NFS Gateway node:

# Check the NFS service status on each node
systemctl status nfs-server.service
# Install it if missing
yum -y install nfs-utils

# Check the rpcbind service status
systemctl status rpcbind.service
# Start rpcbind if it is not running
systemctl start rpcbind.service

4.4 Kafka cannot create a Topic

After the Kafka component is installed successfully, creating a Topic fails:

[root@cdh2 lib]# kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic canal
Error while executing topic command : Replication factor: 1 larger than available brokers: 0.
19/06/16 23:27:30 ERROR admin.TopicCommand$: org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 1 larger than available brokers: 0.

At this point you can log in with zkCli.sh and inspect Kafka's zNodes. Everything looks normal: the broker ids are there, and the Topic names created by background programs are there too, yet the command above still fails.

First restart both ZooKeeper and Kafka and try again. If it still fails, change Kafka's zNode path in ZooKeeper to the root node, restart, and then create and list the topic again; Kafka should now work.
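A related pitfall: when Kafka is configured with a ZooKeeper chroot (the `zookeeper.chroot` setting in Cloudera Manager, e.g. `/kafka`), the `--zookeeper` argument must include that suffix, otherwise the tool talks to an empty root and sees zero brokers. A sketch of building the connect string (the quorum and chroot values here are placeholders to adjust for your cluster):

```shell
# Placeholders: your ZooKeeper quorum and Kafka's configured chroot.
ZK_QUORUM="localhost:2181"
KAFKA_CHROOT="/kafka"     # use an empty string if Kafka's zNode is the root
ZK_CONNECT="${ZK_QUORUM}${KAFKA_CHROOT}"

# The create command then looks like (printed here, not executed):
echo "kafka-topics --create --zookeeper ${ZK_CONNECT} \
  --replication-factor 1 --partitions 1 --topic canal"
```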

4.5 Driver not found when installing or starting Hive

Sometimes, even though the driver jar has already been placed under /usr/share/java/, you may still see the following error:

+ [[ -z /opt/cloudera/cm ]]
+ JDBC_JARS_CLASSPATH='/opt/cloudera/cm/lib/*:/usr/share/java/mysql-connector-java.jar:/opt/cloudera/cm/lib/postgresql-42.1.4.jre7.jar:/usr/share/java/oracle-connector-java.jar'
++ /usr/java/jdk1.8.0_181-cloudera/bin/java -Djava.net.preferIPv4Stack=true -cp '/opt/cloudera/cm/lib/*:/usr/share/java/mysql-connector-java.jar:/opt/cloudera/cm/lib/postgresql-42.1.4.jre7.jar:/usr/share/java/oracle-connector-java.jar' com.cloudera.cmf.service.hive.HiveMetastoreDbUtil /var/run/cloudera-scm-agent/process/32-hive-metastore-create-tables/metastore_db_py.properties unused --printTableCount
Exception in thread "main" java.lang.RuntimeException: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
	at com.cloudera.cmf.service.hive.HiveMetastoreDbUtil.countTables(HiveMetastoreDbUtil.java:203)
	at com.cloudera.cmf.service.hive.HiveMetastoreDbUtil.printTableCount(HiveMetastoreDbUtil.java:284)
	at com.cloudera.cmf.service.hive.HiveMetastoreDbUtil.main(HiveMetastoreDbUtil.java:334)
Caused by: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:264)
	at com.cloudera.enterprise.dbutil.SqlRunner.open(SqlRunner.java:180)
	at com.cloudera.enterprise.dbutil.SqlRunner.getDatabaseName(SqlRunner.java:264)
	at com.cloudera.cmf.service.hive.HiveMetastoreDbUtil.countTables(HiveMetastoreDbUtil.java:197)
	... 2 more
+ NUM_TABLES='[                          main] SqlRunner                      ERROR Unable to find the MySQL JDBC driver. Please make sure that you have installed it as per instruction in the installation guide.'
+ [[ 1 -ne 0 ]]
+ echo 'Failed to count existing tables.'
+ exit 1

Copy (or symlink) the driver into Hive's lib directory:

# Whatever the driver version, the file must be named mysql-connector-java.jar
# Make sure the driver jar has sufficient permissions
chmod 755 /usr/share/java/mysql-connector-java.jar
#ln -s /usr/share/java/mysql-connector-java.jar /opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hive/lib/mysql-connector-java.jar
ln -s /usr/share/java/mysql-connector-java.jar /opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hive/lib/mysql-connector-java.jar

4.6 Hive startup reports failure to fetch VERSION

If the log shows that the metastore failed to fetch VERSION, check whether the hive metadata database actually contains the metastore tables. If not, initialize them into MySQL's hive database manually:

# Locate the Hive metastore initialization SQL scripts; you will find one per schema version
find / -name hive-schema*mysql.sql
# For example: /opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hive/scripts/metastore/upgrade/mysql/hive-schema-2.1.1.mysql.sql

# Log in to MySQL
mysql -u root -p
> use hive;
> source /opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hive/scripts/metastore/upgrade/mysql/hive-schema-2.1.1.mysql.sql

This initializes Hive's metadata tables; then restart the Hive service instances.

4.7 Impala time zone setting

Without this setting, date and timestamp values returned by Impala are off by eight hours, so it is best to configure it:

Cloudera Manager web UI  >  Impala  >  Configuration  >  search for: Impala Daemon Command Line Argument Advanced Configuration Snippet (Safety Valve)  >  add -use_local_tz_for_unix_timestamp_conversions=true

Save the configuration and restart Impala.

4.8 Cannot log in as the hdfs user

When HDFS permission checking is enabled, you sometimes need to switch to the hdfs user to operate on data, but you may see:

[root@cdh1 ~]# su hdfs
This account is currently not available.

In that case check the system user entry and change the hdfs user's shell from /sbin/nologin to /bin/bash, save, and switch to hdfs again:

[root@cdh1 ~]# cat /etc/passwd | grep hdfs
hdfs:x:954:961:Hadoop HDFS:/var/lib/hadoop-hdfs:/sbin/nologin

# Change the line above to:
hdfs:x:954:961:Hadoop HDFS:/var/lib/hadoop-hdfs:/bin/bash
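Instead of editing /etc/passwd by hand, `usermod -s` can change the shell, and the seventh colon-separated field of the passwd entry can be checked with awk. A small sketch (the real `usermod` call needs root, so it is shown as a comment; the demonstration runs against a sample file):

```shell
# Change the hdfs user's login shell without editing /etc/passwd directly:
#   usermod -s /bin/bash hdfs        # requires root

# shell_of <user> <passwd-file>: print the login-shell field (field 7) of a user.
shell_of() {
  awk -F: -v u="$1" '$1 == u { print $7 }' "$2"
}

# Demonstration against a sample passwd line (a real check would use /etc/passwd):
printf '%s\n' 'hdfs:x:954:961:Hadoop HDFS:/var/lib/hadoop-hdfs:/bin/bash' > /tmp/passwd.sample
shell_of hdfs /tmp/passwd.sample     # prints: /bin/bash
```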

4.9 NTP problems

The detailed role log shows:

Check failed: _s.ok() Bad status: Runtime error: Cannot initialize clock: failed to wait for clock sync using command '/usr/bin/chronyc waitsync 60 0 0 1': /usr/bin/chronyc: process exited with non-zero status 1

Running the ntptime command on the server prints the following, which indicates an NTP problem:

[root@cdh3 ~]# ntptime
ntp_gettime() returns code 5 (ERROR)
  time e0b2b833.5be28000  Tue, Jun 18 2019  9:09:07.358, (.358925),
  maximum error 16000000 us, estimated error 16000000 us, TAI offset 0
ntp_adjtime() returns code 5 (ERROR)
  modes 0x0 (),
  offset 0.000 us, frequency 9.655 ppm, interval 1 s,
  maximum error 16000000 us, estimated error 16000000 us,
  status 0x40 (UNSYNC),
  time constant 10, precision 1.000 us, tolerance 500 ppm,

Pay particular attention to the important parts of the output (us = microseconds):

  • maximum error 16000000 us: the clock error is 16 s, already above the maximum error Kudu tolerates
  • status 0x40 (UNSYNC): the synchronization status; the clock is no longer in sync. A healthy clock shows status 0x2001 (PLL,NANO).

Healthy output looks like this:

[root@cdh1 ~]# ntptime
ntp_gettime() returns code 0 (OK)
  time e0b2b842.b180f51c  Tue, Jun 18 2019  9:09:22.693, (.693374110),
  maximum error 27426 us, estimated error 0 us, TAI offset 0
ntp_adjtime() returns code 0 (OK)
  modes 0x0 (),
  offset 0.000 us, frequency 3.932 ppm, interval 1 s,
  maximum error 27426 us, estimated error 0 us,
  status 0x2001 (PLL,NANO),
  time constant 6, precision 0.001 us, tolerance 500 ppm,

If the status is UNSYNC, check the server's NTP service with systemctl status ntpd.service. If NTP is not configured, install and configure it; see the NTP part of section 1.4. For more background, see the NTP Clock Synchronization documentation or other references.
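The two indicators above (the return-code line and the UNSYNC flag) can be checked in a script. A minimal sketch that reads ntptime output from stdin, so on a live host it could be used as `ntptime | ntp_ok && echo healthy`:

```shell
# ntp_ok: returns 0 (healthy) when ntp_gettime reports "code 0 (OK)"
# and the status line does not contain UNSYNC; returns 1 otherwise.
ntp_ok() {
  local out
  out="$(cat)"
  case "$out" in
    *"returns code 0 (OK)"*) ;;      # must report OK
    *) return 1 ;;
  esac
  case "$out" in
    *UNSYNC*) return 1 ;;            # clock not synchronized
    *) return 0 ;;
  esac
}
```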

4.10 The root user lacks permission on the HDFS filesystem

# 1 On Linux, add a supergroup group
groupadd supergroup

# 2 Add the root user to supergroup
usermod -a -G supergroup root

# 3 Sync the system's group information to the HDFS filesystem
sudo -u hdfs hdfs dfsadmin -refreshUserToGroupsMappings

# 4 List the users that belong to the supergroup group
grep 'supergroup:' /etc/group

4.11 Other exceptions when installing components

If everything above is fine, the most common installation failures for components are file ownership and permission problems; troubleshoot and fix them as in section 4.2. Check the corresponding logs and resolve exceptions based on what they say.



Here is a screenshot of the fully installed CDH web UI.

The Cloudera Manager Admin page:
(screenshot)

5 Checking service status and restarting services via the API

The official API documentation address:

When the Cloudera Manager Admin page is not convenient to access, you can check services, or restart them when they have stopped, through the API. Below, assume the admin account is admin with password admin, the cloudera-scm-server service runs on cdh1, and the Kudu service is used as the example; other services work the same way.

5.1 View all hosts in the cluster

# -u specifies the username and password
curl -u admin:admin 'http://cdh1:7180/api/v1/hosts'
# Returns JSON of the following form

You can see that items lists every host's IP address, hostname, hostId, and so on; the hostId values are used later to identify a role instance on a particular node.

{
  "items" : [ {
    "hostId" : "ecf4247c-xxxx-438e-b026-d77becff1fbe",
    "ipAddress" : "192.168.xxx.xx",
    "hostname" : "cdh1.yore.com",
    "rackId" : "/default",
    "hostUrl" : "http://cdh1.yore.com:7180/cmf/hostRedirect/ecf4247c-xxxx-438e-b026-d77becff1fbe"
  }, {
    "hostId" : "6ce8ae83-xxxx-46e1-a47a-96201681a019",
    "ipAddress" : "192.168.xxx.xx",
    "hostname" : "cdh2.yore.com",
    "rackId" : "/default",
    "hostUrl" : "http://cdh1.yore.com:7180/cmf/hostRedirect/6ce8ae83-xxxx-46e1-a47a-96201681a019"
  }, {
    "hostId" : "9e512856-xxxx-4608-8891-0573cdc68bee",
    "ipAddress" : "192.168.xxx.xx",
    "hostname" : "cdh3.yore.com",
    "rackId" : "/default",
    "hostUrl" : "http://cdh1.yore.com:7180/cmf/hostRedirect/9e512856-xxxx-4608-8891-0573cdc68bee"
  } ]
}
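To script against this response, the hostId for a given hostname can be pulled out with a short embedded Python snippet (assuming python3 is available on the host; jq would work equally well). A sketch, demonstrated against a trimmed sample of the JSON above:

```shell
# host_id <hostname>: reads the /api/v1/hosts JSON on stdin and prints
# the hostId of the matching hostname.
host_id() {
  python3 -c '
import json, sys
want = sys.argv[1]
for h in json.load(sys.stdin)["items"]:
    if h["hostname"] == want:
        print(h["hostId"])
' "$1"
}

# In practice the input would come from:
#   curl -s -u admin:admin http://cdh1:7180/api/v1/hosts | host_id cdh1.yore.com
sample='{"items":[{"hostId":"ecf4247c-xxxx","hostname":"cdh1.yore.com"},{"hostId":"6ce8ae83-xxxx","hostname":"cdh2.yore.com"}]}'
printf '%s' "$sample" | host_id cdh1.yore.com   # prints: ecf4247c-xxxx
```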

5.2 View the cluster name

curl -u admin:admin 'http://cdh1:7180/api/v1/clusters'

name is the cluster name; the requests below use this value.

{
  "items" : [ {
    "name" : "yore-cdh-test",
    "version" : "CDH6"
  } ]
}

5.3 View the services in the cluster

curl -u admin:admin 'http://cdh1:7180/api/v1/clusters/yore-cdh-test/services' 

Other services are omitted; here we focus on the Kudu service. In general name identifies the component's service, e.g. Apache Kudu's name is kudu:

{
  "items": [
    {
      "healthChecks": [
        {
          "name": "HIVE_HIVEMETASTORES_HEALTHY",
          "summary": "GOOD"
        },
        {
          "name": "HIVE_HIVESERVER2S_HEALTHY",
          "summary": "GOOD"
        },
        {
          "name": "HIVE_WEBHCATS_HEALTHY",
          "summary": "GOOD"
        }
      ],
      "name": "hive",
      "type": "HIVE",
      "clusterRef": {
        "clusterName": "yore-cdh-test"
      },
      "serviceUrl": "http://cdh1.yore.com:7180/cmf/serviceRedirect/hive",
      "serviceState": "STARTED",
      "healthSummary": "GOOD",
      "configStale": false
    },
    {
      "healthChecks": [],
      "name": "kudu",
      "type": "KUDU",
      "clusterRef": {
        "clusterName": "yore-cdh-test"
      },
      "serviceUrl": "http://cdh1.yore.com:7180/cmf/serviceRedirect/kudu",
      "serviceState": "STARTED",
      "healthSummary": "GOOD",
      "configStale": false
    },
    {
      "healthChecks": [
        {
          "name": "IMPALA_CATALOGSERVER_HEALTH",
          "summary": "GOOD"
        },
        {
          "name": "IMPALA_IMPALADS_HEALTHY",
          "summary": "GOOD"
        },
        {
          "name": "IMPALA_STATESTORE_HEALTH",
          "summary": "GOOD"
        }
      ],
      "name": "impala",
      "type": "IMPALA",
      "clusterRef": {
        "clusterName": "yore-cdh-test"
      },
      "serviceUrl": "http://cdh1.yore.com:7180/cmf/serviceRedirect/impala",
      "serviceState": "STARTED",
      "healthSummary": "GOOD",
      "configStale": false
    },
    {
      "healthChecks": [
        {
          "name": "HUE_HUE_SERVERS_HEALTHY",
          "summary": "GOOD"
        },
        {
          "name": "HUE_LOAD_BALANCER_HEALTHY",
          "summary": "GOOD"
        }
      ],
      "name": "hue",
      "type": "HUE",
      "clusterRef": {
        "clusterName": "yore-cdh-test"
      },
      "serviceUrl": "http://cdh1.yore.com:7180/cmf/serviceRedirect/hue",
      "serviceState": "STARTED",
      "healthSummary": "GOOD",
      "configStale": false
    }
  ]
}

5.4 View the status of a specific service

curl -u admin:admin 'http://cdh1:7180/api/v1/clusters/yore-cdh-test/services/kudu'

This shows that the kudu service is started (serviceState: STARTED) and its health is good (healthSummary: GOOD):

{
  "healthChecks": [],
  "name": "kudu",
  "type": "KUDU",
  "clusterRef": {
    "clusterName": "yore-cdh-test"
  },
  "serviceUrl": "http://cdh1.yore.com:7180/cmf/serviceRedirect/kudu",
  "serviceState": "STARTED",
  "healthSummary": "GOOD",
  "configStale": false
}

5.5 View the role information of a specific service

curl -u admin:admin 'http://cdh1:7180/api/v1/clusters/yore-cdh-test/services/kudu/roles'

The returned JSON shows the running state of each Kudu role instance. Combined with the hostId values from section 5.1, you can tell which node each role runs on. For example, to find the Kudu Tablet Server on hostname cdh1.yore.com: section 5.1 shows that cdh1.yore.com has hostId ecf4247c-xxxx-438e-b026-d77becff1fbe, and in the JSON below the Tablet Server with that hostId has the name kudu-KUDU_TSERVER-90ffd2c4da706e992590ec4ad20ec5a3.

{
  "items": [
    {
      "healthChecks": [
        {
          "name": "KUDU_KUDU_TSERVER_FILE_DESCRIPTOR",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_HOST_HEALTH",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_LOG_DIRECTORY_FREE_SPACE",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_SCM_HEALTH",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_SWAP_MEMORY_USAGE",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_UNEXPECTED_EXITS",
          "summary": "GOOD"
        }
      ],
      "name": "kudu-KUDU_TSERVER-90ffd2c4da706e992590ec4ad20ec5a3",
      "type": "KUDU_TSERVER",
      "serviceRef": {
        "clusterName": "yore-cdh-test",
        "serviceName": "kudu"
      },
      "hostRef": {
        "hostId": "ecf4247c-xxxx-438e-b026-d77becff1fbe"
      },
      "roleUrl": "http://cdh1.yore.com:7180/cmf/roleRedirect/kudu-KUDU_TSERVER-90ffd2c4da706e992590ec4ad20ec5a3",
      "roleState": "STARTED",
      "healthSummary": "GOOD",
      "configStale": false
    },
    {
      "healthChecks": [
        {
          "name": "KUDU_KUDU_MASTER_FILE_DESCRIPTOR",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_MASTER_HOST_HEALTH",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_MASTER_LOG_DIRECTORY_FREE_SPACE",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_MASTER_SCM_HEALTH",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_MASTER_SWAP_MEMORY_USAGE",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_MASTER_UNEXPECTED_EXITS",
          "summary": "GOOD"
        }
      ],
      "name": "kudu-KUDU_MASTER-ec14a1fa91e54c0ec078bbc575a3db83",
      "type": "KUDU_MASTER",
      "serviceRef": {
        "clusterName": "yore-cdh-test",
        "serviceName": "kudu"
      },
      "hostRef": {
        "hostId": "9e512856-xxxx-4608-8891-0573cdc68bee"
      },
      "roleUrl": "http://cdh1.yore.com:7180/cmf/roleRedirect/kudu-KUDU_MASTER-ec14a1fa91e54c0ec078bbc575a3db83",
      "roleState": "STARTED",
      "healthSummary": "GOOD",
      "configStale": false
    },
    {
      "healthChecks": [
        {
          "name": "KUDU_KUDU_TSERVER_FILE_DESCRIPTOR",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_HOST_HEALTH",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_LOG_DIRECTORY_FREE_SPACE",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_SCM_HEALTH",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_SWAP_MEMORY_USAGE",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_UNEXPECTED_EXITS",
          "summary": "GOOD"
        }
      ],
      "name": "kudu-KUDU_TSERVER-ec14a1fa91e54c0ec078bbc575a3db83",
      "type": "KUDU_TSERVER",
      "serviceRef": {
        "clusterName": "yore-cdh-test",
        "serviceName": "kudu"
      },
      "hostRef": {
        "hostId": "9e512856-xxxx-4608-8891-0573cdc68bee"
      },
      "roleUrl": "http://cdh1.yore.com:7180/cmf/roleRedirect/kudu-KUDU_TSERVER-ec14a1fa91e54c0ec078bbc575a3db83",
      "roleState": "STARTED",
      "healthSummary": "GOOD",
      "configStale": false
    },
    {
      "healthChecks": [
        {
          "name": "KUDU_KUDU_TSERVER_FILE_DESCRIPTOR",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_HOST_HEALTH",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_LOG_DIRECTORY_FREE_SPACE",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_SCM_HEALTH",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_SWAP_MEMORY_USAGE",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_UNEXPECTED_EXITS",
          "summary": "GOOD"
        }
      ],
      "name": "kudu-KUDU_TSERVER-a4748f9a954f807e8e341f3f802b972f",
      "type": "KUDU_TSERVER",
      "serviceRef": {
        "clusterName": "yore-cdh-test",
        "serviceName": "kudu"
      },
      "hostRef": {
        "hostId": "6ce8ae83-xxxx-46e1-a47a-96201681a019"
      },
      "roleUrl": "http://cdh1.yore.com:7180/cmf/roleRedirect/kudu-KUDU_TSERVER-a4748f9a954f807e8e341f3f802b972f",
      "roleState": "STARTED",
      "healthSummary": "GOOD",
      "configStale": false
    }
  ]
}

5.6 Restart a specific service instance on a specific node

From the analysis above, the role instance name of the Kudu Tablet Server on cdh1.yore.com is kudu-KUDU_TSERVER-90ffd2c4da706e992590ec4ad20ec5a3. The request below restarts it; you can also restart several instances in one request by adding their role instance names to the items JSON array.

curl -X POST -H "Content-Type:application/json" -u admin:admin \
-d '{ "items": ["kudu-KUDU_TSERVER-90ffd2c4da706e992590ec4ad20ec5a3"] }' \
'http://cdh1:7180/api/v1/clusters/yore-cdh-test/services/kudu/roleCommands/restart'

If no error is reported and a command execution id is returned, the restart was submitted successfully:

{
  "errors" : [ ],
  "items" : [ {
    "id" : 5050,
    "name" : "Restart",
    "startTime" : "2019-06-14T02:10:59.726Z",
    "active" : true,
    "serviceRef" : {
      "clusterName" : "yore-cdh-test",
      "serviceName" : "kudu"
    },
    "roleRef" : {
      "clusterName" : "yore-cdh-test",
      "serviceName" : "kudu",
      "roleName" : "kudu-KUDU_TSERVER-a4748f9a954f807e8e341f3f802b972f"
    }
  } ]
}
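The returned command id can then be polled until the restart finishes, via the /api/v1/commands/{id} endpoint of the same API. A sketch of the status check, demonstrated against a trimmed sample response (the curl line in the comment is what a real poll would look like; python3 is assumed to be available):

```shell
# command_state: reads a Cloudera Manager command JSON on stdin and prints
# RUNNING while the command is active, otherwise SUCCEEDED or FAILED.
command_state() {
  python3 -c '
import json, sys
c = json.load(sys.stdin)
if c.get("active"):
    print("RUNNING")
else:
    print("SUCCEEDED" if c.get("success") else "FAILED")
'
}

# Real poll (same credentials as above):
#   curl -s -u admin:admin http://cdh1:7180/api/v1/commands/5050 | command_state
printf '%s' '{"id":5050,"name":"Restart","active":false,"success":true}' | command_state
# prints: SUCCEEDED
```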

Finally

This installation process also applies to other CDH 6.x versions. Due to time constraints there are inevitably mistakes in this article; if you find any, or run into other problems during installation, feel free to leave a comment.


Note: this entire installation was performed on x86 systems, which the big-data ecosystem supports best. If your environment is aarch64 (an execution state of the ARMv8 architecture), the official releases do not support it directly and the sources must be recompiled. Huawei provides partial support; see the porting guide (CDH) in the official Huawei Cloud Kunpeng documentation.


For JDK or CDH upgrades, see my other blog post on upgrading the CDH JDK to Open JDK 1.8 and upgrading CDH.


