Building Hue from Source with Hive Support: The Complete Workflow



Installing Hue

Test cluster: hadoop101, hadoop102, hadoop103
Cluster configuration: 3 Alibaba Cloud servers, CentOS 7.5, 2 cores, 8 GB memory each
Framework layout:
hue: hadoop102
hadoop: hadoop101 hadoop102 hadoop103
hive: hadoop101
mysql: hadoop101
zookeeper: hadoop101 hadoop102 hadoop103
kafka: hadoop101 hadoop102 hadoop103
spark: hadoop101
A pre-built Hue can also be used directly; just adjust the configuration for your own cluster.
This article assumes every framework above except Hue is already installed and configured.
Note: adapt the node names in the configuration below to match your own cluster.
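All of the steps below address the machines by hostname, so each node is assumed to resolve hadoop101/hadoop102/hadoop103, for example via /etc/hosts entries like these (the IPs are placeholders; substitute your own private addresses):

172.16.0.101 hadoop101
172.16.0.102 hadoop102
172.16.0.103 hadoop103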


(1) Install Hue on hadoop102: create a software directory, upload the source archive, and unzip it

[root@hadoop102 software]# unzip hue-master.zip -d /opt/module/

(2) Install the build dependencies

[root@hadoop102 software]# cd /opt/module/hue-master/
[root@hadoop102 hue-master]# sudo yum install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi cyrus-sasl-plain gcc gcc-c++ krb5-devel libffi-devel libxml2-devel libxslt-devel make mysql mysql-devel openldap-devel python-devel sqlite-devel gmp-devel

(3) Compile and install. When the build completes, a build directory is generated.

[hue@hadoop102 hue-master]# make apps
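If the compilation succeeds, the launcher scripts used in the later steps end up under build/env/bin; a quick sanity check that the build produced them:

[hue@hadoop102 hue-master]# ls build/env/bin/hue build/env/bin/supervisor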

(4) Modify the Hadoop configuration files. Because the cluster runs in HA mode, HttpFS is used (unlike plain WebHDFS, HttpFS can sit in front of the logical nameservice and follow NameNode failover).

[hue@hadoop102 hue-master]# cd /opt/module/hadoop-3.1.3/etc/hadoop/
[hue@hadoop102 hadoop]# vim hdfs-site.xml 
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
[hue@hadoop102 hadoop]# vim core-site.xml
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>	
</property>
[hue@hadoop102 hadoop]# vim httpfs-site.xml
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>

(5) Distribute the configuration files

[hue@hadoop102 etc]# pwd
/opt/module/hadoop-3.1.3/etc
[hue@hadoop102 etc]# scp -r hadoop/ hadoop101:/opt/module/hadoop-3.1.3/etc/
[hue@hadoop102 etc]# scp -r hadoop/ hadoop103:/opt/module/hadoop-3.1.3/etc/

(6) Edit the Hue configuration file to integrate HDFS

[hue@hadoop102 hadoop]# cd /opt/module/hue-master/desktop/conf
[hue@hadoop102 conf]# vim pseudo-distributed.ini
[desktop]
  http_host=hadoop102
  http_port=8000

[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      # Enter the filesystem uri
      fs_defaultfs=hdfs://mycluster:8020
      logical_name=mycluster
      webhdfs_url=http://hadoop102:14000/webhdfs/v1
      hadoop_conf_dir=/opt/module/hadoop-3.1.3/etc/hadoop/conf
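Once HttpFS is running (it is started together with the cluster restart in step (10) below), the webhdfs_url can be verified directly with curl; this is the standard WebHDFS LISTSTATUS call against the HttpFS port:

[hue@hadoop102 conf]# curl "http://hadoop102:14000/webhdfs/v1/?op=LISTSTATUS&user.name=hue"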

(7) Integrate YARN. The logical_name values below must match the ResourceManager names configured in yarn-site.xml. Note: be sure to uncomment the [[[ha]]] section header, otherwise Hue will throw errors.

 [[yarn_clusters]]

    [[[default]]]
      # Enter the host on which you are running the ResourceManager
      ## resourcemanager_host=mycluster

      # The port where the ResourceManager IPC listens on
      ## resourcemanager_port=8032

      # Whether to submit jobs to this cluster
      submit_to=True

      # Resource Manager logical name (required for HA)
      logical_name=rm1

      # Change this if your YARN cluster is Kerberos-secured
      ## security_enabled=false

      # URL of the ResourceManager API
      resourcemanager_api_url=http://hadoop101:8088

      # URL of the ProxyServer API
      ## proxy_api_url=http://hadoop101:8088

      # URL of the HistoryServer API
      ## history_server_api_url=http://localhost:19888

     [[[ha]]]
      # Resource Manager logical name (required for HA)
      logical_name=rm2

      # Un-comment to enable
      submit_to=True

      # URL of the ResourceManager API
      resourcemanager_api_url=http://hadoop103:8088

      # ...
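Hue polls the address in resourcemanager_api_url, so it is worth confirming both ResourceManagers respond; the standard YARN REST endpoint below reports each RM's HA state (ACTIVE or STANDBY):

[hue@hadoop102 conf]# curl http://hadoop101:8088/ws/v1/cluster/info
[hue@hadoop102 conf]# curl http://hadoop103:8088/ws/v1/cluster/info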

(8) Integrate MySQL

 [[database]]
    # Database engine is typically one of:
    # postgresql_psycopg2, mysql, sqlite3 or oracle.
    #
    # Note that for sqlite3, 'name', below is a path to the filename. For other backends, it is the database name.
    # Note for Oracle, options={"threaded":true} must be set in order to avoid crashes.
    # Note for Oracle, you can use the Oracle Service Name by setting "host=" and "port=" and then "name=<host>:<port>/<service_name>".
    # Note for MariaDB use the 'mysql' engine.
     engine=mysql
     host=hadoop101
     port=3306
     user=root
     password=123456
  # conn_max_age option to make database connection persistent value in seconds
    # https://docs.djangoproject.com/en/1.9/ref/databases/#persistent-connections
    ## conn_max_age=0
    # Execute this script to produce the database password. This will be used when 'password' is not set.
    ## password_script=/path/script
    name=hue
Create the hue database in MySQL:
[hue@hadoop101 apache-hive-3.1.2-bin]# mysql -uroot -p123456
mysql> create database hue;
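Because Hue on hadoop102 reaches MySQL on hadoop101 over the network, it is worth confirming remote access works with the same credentials (this assumes the root account is allowed to connect from hadoop102):

[root@hadoop102 hue-master]# mysql -h hadoop101 -uroot -p123456 -e "show databases;"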

The [[[mysql]]] block below lives under [librdbms] -> [[databases]] and lets the Hue SQL editor query MySQL directly. The section header itself must be uncommented for these settings to take effect:

      [[[mysql]]]
      # Name to show in the UI.
      ## nice_name="My SQL DB"

      # For MySQL and PostgreSQL, name is the name of the database.
      # For Oracle, Name is instance of the Oracle server. For express edition
      # this is 'xe' by default.
      name=huemetastore
      # Database backend to use. This can be:
      # 1. mysql
      # 2. postgresql
      # 3. oracle
      engine=mysql
      # MySQL runs on hadoop101 in this cluster
      host=hadoop101
      port=3306
      user=root
      password=123456

(9) Integrate Hive

[beeswax]

  # Host where HiveServer2 is running.
  # If Kerberos security is enabled, use fully-qualified domain name (FQDN).
  hive_server_host=hadoop101

  # Port where HiveServer2 Thrift server runs on.
  hive_server_port=10000

  hive_conf_dir=/opt/module/apache-hive-3.1.2-bin/conf/
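Note that hive_conf_dir is a path on the Hue host (hadoop102) and must contain hive-site.xml. If Hive is only installed on hadoop101, one option is to copy its conf directory over first:

[root@hadoop102 ~]# mkdir -p /opt/module/apache-hive-3.1.2-bin
[root@hadoop101 apache-hive-3.1.2-bin]# scp -r conf/ hadoop102:/opt/module/apache-hive-3.1.2-bin/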

(10) Stop the cluster, then restart it

[root@hadoop101 software]# /opt/module/apache-zookeeper-3.5.7-bin/bin/zkServer.sh start
[root@hadoop102 software]# /opt/module/apache-zookeeper-3.5.7-bin/bin/zkServer.sh start
[root@hadoop103 software]# /opt/module/apache-zookeeper-3.5.7-bin/bin/zkServer.sh start
[root@hadoop101 software]# start-all.sh
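Note that start-all.sh does not launch HttpFS, which the webhdfs_url on port 14000 depends on. On Hadoop 3.x it runs as a separate daemon; start it on hadoop102 to match the URL configured in step (6) (if your version still ships sbin/httpfs.sh, that script works as well):

[root@hadoop102 software]# hdfs --daemon start httpfs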

(11) Start the Hive services

[root@hadoop101 apache-hive-3.1.2-bin]# nohup hive --service metastore > metastore.log 2>&1 &
[root@hadoop101 apache-hive-3.1.2-bin]# nohup hive --service hiveserver2 > hiveserver2.log 2>&1 &
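Both services take a moment to come up. Before starting Hue, you can confirm that the metastore (default port 9083) and HiveServer2 (port 10000) are listening, and optionally run a query through Beeline:

[root@hadoop101 apache-hive-3.1.2-bin]# ss -nltp | grep -E '9083|10000'
[root@hadoop101 apache-hive-3.1.2-bin]# beeline -u jdbc:hive2://hadoop101:10000 -n root -e "show databases;"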

(12) Create the hue user and change ownership of the Hue directory

[root@hadoop102 hue-master]# useradd hue
[root@hadoop102 hue-master]# passwd hue
[root@hadoop102 hue-master]# chown -R hue:hue /opt/module/hue-master/

(13) Initialize the database

[root@hadoop102 hue-master]# build/env/bin/hue syncdb
[root@hadoop102 hue-master]# build/env/bin/hue migrate
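If the [[database]] section from step (8) is correct, the migrate step populates the hue database on hadoop101; a quick way to confirm the tables were created:

[root@hadoop102 hue-master]# mysql -h hadoop101 -uroot -p123456 -e "use hue; show tables;"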

(14) Start Hue. Note: both Hive services, metastore and hiveserver2, must already be running, with HiveServer2 occupying port 10000.

[root@hadoop102 hue-master]# build/env/bin/supervisor
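Hue now listens on the host and port set in step (6), so open http://hadoop102:8000 in a browser; the account created at the first login becomes the Hue superuser. To keep the service running after the shell exits, it can also be launched in the background:

[root@hadoop102 hue-master]# nohup build/env/bin/supervisor > hue.log 2>&1 &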

(15) Log in to Hue and run Hive SQL
