Hadoop, Hive and Spark Installation and Configuration Guide

1 Installing Hadoop

1.1 Download the hadoop-2.7.x tarball and extract it to the target directory, then edit the following files under $HADOOP_HOME/etc/hadoop:

  • hadoop-env.sh: check that JAVA_HOME and HADOOP_CONF_DIR are set correctly;
  • core-site.xml: add the following configuration:
<property>
       <name>hadoop.tmp.dir</name>
       <value>file:/data/hadoop-2.7.3/tmp</value>
</property>
<property>
       <name>fs.defaultFS</name>
       <value>hdfs://localhost:8000</value>
</property>
  • hdfs-site.xml: add the following configuration:
    <property>
       <name>dfs.replication</name>
       <value>1</value>
    </property>
    <property>
        <name>dfs.http.address</name>
        <value>0.0.0.0:50070</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/data/hadoop-2.7.3/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/data/hadoop-2.7.3/tmp/dfs/data</value>
    </property>
  • yarn-site.xml: add the following configuration:
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
  • mapred-site.xml: add the following configuration:
<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
</property>
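A single malformed tag in any of these XML files will make the daemons fail at startup with a parse error. Before moving on, the edited files can be sanity-checked for well-formedness; a minimal sketch, assuming python3 is available (the helper name xml_ok is made up for illustration):

```shell
# Hypothetical helper: report whether a Hadoop config file is well-formed XML.
# Uses python3's stdlib parser, so no extra packages are needed.
xml_ok() {
  python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' "$1" \
    && echo "OK: $1" || echo "BROKEN: $1"
}

# Real use would be, e.g.:
#   for f in $HADOOP_HOME/etc/hadoop/{core,hdfs,yarn,mapred}-site.xml; do xml_ok "$f"; done
# Demo on a throwaway file:
f=$(mktemp)
printf '<configuration><property><name>x</name><value>1</value></property></configuration>' > "$f"
xml_ok "$f"
```

This catches the most common editing mistake (an unclosed `<property>` block) before it surfaces as a cryptic daemon stack trace.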

1.2 Edit /etc/profile and add the following environment settings:

export JAVA_HOME=/data/jdk1.8.0_141
export SCALA_HOME=/data/scala-2.11.8
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/data/hadoop-2.7.3
export HIVE_HOME=/data/apache-hive-2.1.0
export YARN_HOME=$HADOOP_HOME
export HIVE_CONF_DIR=$HIVE_HOME/conf
export SPARK_HOME=/data/spark-2.4.0-bin-hadoop2.7
export PATH=$PATH:$JAVA_HOME/bin:$SCALA_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin:$SPARK_HOME/bin
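A typo in any of these *_HOME variables breaks every later step, so it is worth confirming that each one points at a real directory after sourcing the profile. A minimal sketch (the helper name check_home is made up; the paths are the ones used in this guide, adjust to your layout):

```shell
# Hypothetical helper: print "ok" or "MISSING" for each installation directory.
check_home() {
  local name="$1" dir="$2"
  if [ -d "$dir" ]; then
    echo "$name ok: $dir"
  else
    echo "$name MISSING: $dir"
    return 1
  fi
}

check_home JAVA_HOME   /data/jdk1.8.0_141              || true
check_home SCALA_HOME  /data/scala-2.11.8              || true
check_home HADOOP_HOME /data/hadoop-2.7.3              || true
check_home HIVE_HOME   /data/apache-hive-2.1.0         || true
check_home SPARK_HOME  /data/spark-2.4.0-bin-hadoop2.7 || true
```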

1.3 Set up passwordless SSH login for your own account

Run ssh-keygen to generate a key pair, pressing Enter at every prompt;
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
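If `ssh localhost` still prompts for a password afterwards, the usual culprit is file permissions: sshd silently ignores authorized_keys when the directory or file is group/world writable. A sketch of the fix (fix_ssh_perms is a made-up helper, demonstrated on a throwaway directory; in real use pass ~/.ssh):

```shell
# Hypothetical helper: tighten permissions so sshd accepts the key.
fix_ssh_perms() {
  local dir="$1"            # normally ~/.ssh
  chmod 700 "$dir"
  chmod 600 "$dir/authorized_keys"
}

# Demo on a temporary directory instead of the real ~/.ssh:
d=$(mktemp -d)
touch "$d/authorized_keys"
fix_ssh_perms "$d"
stat -c '%a' "$d/authorized_keys"   # prints 600
```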

1.4 Initialize the Hadoop DFS

hdfs namenode -format

1.5 Verify the Hadoop installation

$HADOOP_HOME/sbin/start-all.sh
Watch the console log output; you should see each Hadoop component start up in turn (if anything fails, google the errors and resolve them one by one):
Starting namenodes on [localhost]
...
Starting secondary namenodes [0.0.0.0]
...
starting yarn daemons
...
starting resourcemanager
...
localhost: starting nodemanager
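Once start-all.sh returns, `jps` should list five daemons in this pseudo-distributed setup: NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager. A rough sketch that flags missing ones (the function name is made up; it reads `jps` output on stdin and only does substring matching, so treat it as a quick check, not a precise one):

```shell
# Hypothetical helper: print each expected daemon absent from jps output.
missing_daemons() {
  local out d
  out=$(cat)
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    case "$out" in
      *"$d"*) ;;                      # daemon name found in the listing
      *) echo "missing: $d" ;;
    esac
  done
}

# Real use:  jps | missing_daemons
# Demo with canned output (only three daemons "running"):
printf '1 NameNode\n2 DataNode\n3 ResourceManager\n' | missing_daemons
```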

2 Installing Hive

2.1 Download and install MySQL

sudo yum localinstall https://dev.mysql.com/get/mysql57-community-release-el7-9.noarch.rpm
sudo yum install mysql-community-server
sudo service mysqld start
grep 'A temporary password' /var/log/mysqld.log |tail -1
mysql -h localhost -u root -p${temporary password from above}
mysql> CREATE USER 'hive'@'%' IDENTIFIED BY 'yourpasswd';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
mysql> create database hive_db;
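The grep above only shows the log line; for scripting the login, the password itself can be cut out of it. A sketch, assuming the MySQL 5.7 log format shown in the demo line below (verify against your actual /var/log/mysqld.log):

```shell
# Hypothetical helper: read mysqld.log on stdin, print the last temporary
# root password that was generated.
temp_pw() {
  sed -n 's/.*A temporary password is generated for root@localhost: //p' | tail -1
}

# Real use:  sudo cat /var/log/mysqld.log | temp_pw
# Demo with a canned log line:
echo '2019-01-01T00:00:00Z 1 [Note] A temporary password is generated for root@localhost: Abc!123xyz' | temp_pw
```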

2.2 Download the hive-2.1.x tarball and extract it to the target directory, then edit the following files under $HIVE_HOME/conf:

  • hive-env.sh: add the following settings:
HADOOP_HOME=/data/hadoop-2.7.3
# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=/data/apache-hive-2.1.0/conf
# Folder containing extra libraries required for hive compilation/execution can be controlled by:
export HIVE_AUX_JARS_PATH=/data/apache-hive-2.1.0/lib
  • hive-site.xml: modify or add the following configuration:
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive_db?createDatabaseIfNotExist=true</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>yourpasswd</value>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/data/apache-hive-2.1.0/iotmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/data/apache-hive-2.1.0/iotmp/${hive.session.id}_resources</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/data/apache-hive-2.1.0/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>/data/apache-hive-2.1.0/iotmp/operation_logs</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
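Hive does not reliably create the scratch and warehouse directories named above on its own, and a missing iotmp directory is a common cause of startup errors. They can be created up front; a sketch (prep_hive_dirs is a made-up helper, and the demo uses a throwaway base directory; note that if you point hive.metastore.warehouse.dir at HDFS instead of a local path, use `hdfs dfs -mkdir -p` for that one):

```shell
# Hypothetical helper: pre-create the local directories referenced in
# hive-site.xml under a given Hive installation base.
prep_hive_dirs() {
  local base="$1"
  mkdir -p "$base/iotmp/operation_logs" "$base/warehouse"
}

# Real use:  prep_hive_dirs /data/apache-hive-2.1.0
# Demo on a temporary base directory:
b=$(mktemp -d)
prep_hive_dirs "$b"
ls "$b"   # lists iotmp and warehouse
```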

2.3 Initialize the Hive schema

cp mysql-connector-java-5.1.37.jar /data/apache-hive-2.1.0/lib
schematool -dbType mysql -initSchema --verbose

2.4 Verify the installation

hive --service metastore &
hive -e "show databases"

3 Installing Spark

3.1 Download, extract, and configure Scala

The process is similar to installing the JDK and is not repeated here.

3.2 Edit the Spark configuration files

  • Edit spark-env.sh and add the following settings:
    JAVA_HOME=/data/jdk1.8.0_91
    SCALA_HOME=/data/scala-2.12.8
    SPARK_MASTER_HOST=localhost
    SPARK_MASTER_IP=localhost
    SPARK_MASTER_PORT=7077
    SPARK_MASTER_WEBUI_PORT=8080
    SPARK_WORKER_MEMORY=4g
    HADOOP_HOME=/data/hadoop-2.7.3
    HADOOP_CONF_DIR=/data/hadoop-2.7.3/etc/hadoop
    SPARK_DIST_CLASSPATH=/data/hadoop-2.7.3/etc/hadoop:/data/hadoop-2.7.3/share/hadoop/common/lib/*:/data/hadoop-2.7.3/share/hadoop/common/*:/data/hadoop-2.7.3/share/hadoop/hdfs:/data/hadoop-2.7.3/share/hadoop/hdfs/lib/*:/data/hadoop-2.7.3/share/hadoop/hdfs/*:/data/hadoop-2.7.3/share/hadoop/yarn/lib/*:/data/hadoop-2.7.3/share/hadoop/yarn/*:/data/hadoop-2.7.3/share/hadoop/mapreduce/lib/*:/data/hadoop-2.7.3/share/hadoop/mapreduce/*:/data/hadoop-2.7.3/contrib/capacity-scheduler/*.jar
  • Create the slaves configuration file (the shipped template already ends with a localhost entry, so copying it is enough for a single-node setup):
    cp $SPARK_HOME/conf/slaves.template $SPARK_HOME/conf/slaves
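The long hand-written SPARK_DIST_CLASSPATH value above is error-prone to type; for Spark builds without bundled Hadoop, Spark's documentation suggests deriving it directly with `export SPARK_DIST_CLASSPATH=$(hadoop classpath)`. When the hadoop binary is not on PATH yet, the same string can be generated from the install root; a sketch (the helper name is made up, and the `*` entries are expanded by the JVM classpath handling, not the shell):

```shell
# Hypothetical helper: build the classpath string that `hadoop classpath`
# would emit for a Hadoop 2.7 install rooted at $1.
build_dist_classpath() {
  local h="$1"
  printf '%s\n' "$h/etc/hadoop:$h/share/hadoop/common/lib/*:$h/share/hadoop/common/*:$h/share/hadoop/hdfs:$h/share/hadoop/hdfs/lib/*:$h/share/hadoop/hdfs/*:$h/share/hadoop/yarn/lib/*:$h/share/hadoop/yarn/*:$h/share/hadoop/mapreduce/lib/*:$h/share/hadoop/mapreduce/*:$h/contrib/capacity-scheduler/*.jar"
}

# e.g. in spark-env.sh:
#   SPARK_DIST_CLASSPATH=$(build_dist_classpath /data/hadoop-2.7.3)
build_dist_classpath /data/hadoop-2.7.3
```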

3.3 Start the services and verify the installation

$SPARK_HOME/sbin/start-master.sh
$SPARK_HOME/sbin/start-slave.sh spark://localhost:7077
$SPARK_HOME/bin/spark-shell

4 Issues

After installing Spark on the test server, starting spark-shell fails with the following error:

Failed to initialize compiler: object java.lang.Object in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programmatically, settings.usejavacp.value = true.

Switching the Spark, Scala, and Java versions, as well as passing -Dscala.usejavacp=true explicitly, did not resolve it; the problem does not occur on the local development machine, so it is left unresolved for now.
Kernel version: 3.10.0-514.26.2.el7.x86_64
OS version: CentOS Linux release 7.2.1511 (Core)

