Tools used
xshell
centos7
xftp
apache-hive-2.3.6-bin
MySQL JDBC driver
Step 1: Upload the downloaded Hive package to /usr/local and extract it
Extract command: tar -zxvf apache-hive-2.3.6-bin.tar.gz
Rename the extracted directory: mv apache-hive-2.3.6-bin /usr/local/hive
Step 2: Configure Hive's environment variables
Command: vim /etc/profile
Add: export HIVE_HOME=/usr/local/hive
Add: export PATH=$PATH:$HIVE_HOME/bin
After editing, reload the file so the variables take effect
Command: source /etc/profile
Step 3: Verify that the environment variables took effect; if a version is printed, they did:
Command: hive --version
Step 5: In Hive's conf directory, copy hive-default.xml.template to hive-site.xml
Command: cp hive-default.xml.template hive-site.xml
Step 6: Create the HDFS directories that hive-site.xml refers to and grant them the appropriate permissions. But first we need to start the Hadoop cluster (or pseudo-distributed) services.
Command: start-all.sh   # starts the Hadoop services
Create the directories and grant permissions:
Command: hadoop fs -mkdir -p /user/hive/warehouse
Command: hadoop fs -mkdir -p /tmp/hive
Command: hadoop fs -chmod -R 777 /user/hive/warehouse
Command: hadoop fs -chmod -R 777 /tmp/hive
Command: hadoop fs -ls /
Step 7: Check whether /usr/local/hive contains a temp folder; if not, create it yourself
Command: cd /usr/local/hive
Command: ls
Command: mkdir temp
Command: ls
Grant it the appropriate permissions. Command: chmod -R 777 temp
Step 8: Edit the Hive configuration file hive-site.xml
In vim, search with :/the-thing-you-are-looking-for
For example: :/hive.exec.local.scratchdir
Change each path below to match your own setup!
:/hive.exec.local.scratchdir
<property>
<name>hive.exec.local.scratchdir</name>
<value>/usr/local/hive/temp/root</value>
<description>Local scratch space for Hive jobs</description>
</property>
:/hive.downloaded.resources.dir
<property>
<name>hive.downloaded.resources.dir</name>
<value>/usr/local/hive/temp/${hive.session.id}_resources</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
:/hive.server2.logging.operation.log.location
<property>
<name>hive.server2.logging.operation.log.location</name>
<value>/usr/local/hive/temp/root/operation_logs</value>
<description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
:/hive.querylog.location
<property>
<name>hive.querylog.location</name>
<value>/usr/local/hive/temp/root</value>
<description>Location of Hive run time structured log file</description>
</property>
:/javax.jdo.option.ConnectionURL
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://192.168.121.110:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8</value>
<description> Use your own IP address here.
JDBC connect string for a JDBC metastore.
To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
</description>
</property>
The following is the MySQL-related configuration; pay attention to which MySQL version you installed
# Driver class name for the metastore database
# For the newer 8.0 drivers it is com.mysql.cj.jdbc.Driver
# For the older 5.x drivers it is com.mysql.jdbc.Driver
# This guide uses driver version 5.1.47
:/javax.jdo.option.ConnectionDriverName
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
:/javax.jdo.option.ConnectionUserName
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value> # your MySQL user
<description>Username to use against metastore database</description>
</property>
:/javax.jdo.option.ConnectionPassword
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value> # your MySQL password
</property>
:/hive.metastore.schema.verification
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value> # must be changed to false
<description>
Enforce metastore schema version consistency.
True: Verify that version information stored in is compatible with one from Hive jars. Also disable automatic
schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
proper metastore schema migration. (Default)
False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
</description>
</property>
<property>
<name>hive.metastore.schema.verification.record.version</name>
<value>false</value> # must be changed to false
<description>
When true the current MS version is recorded in the VERSION table. If this is disabled and verification is
enabled the MS will be unusable.
</description>
</property>
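The edits above can also be made with a short script instead of vim. The sketch below assumes the same property names, paths, IP address, and password used in this guide; adjust them to your own setup. Note that ElementTree escapes the `&` in the JDBC URL automatically on write, and that it drops XML comments from the template when rewriting the file.

```python
# Sketch: apply this guide's hive-site.xml edits programmatically.
import xml.etree.ElementTree as ET

EDITS = {
    "hive.exec.local.scratchdir": "/usr/local/hive/temp/root",
    "hive.downloaded.resources.dir": "/usr/local/hive/temp/${hive.session.id}_resources",
    "hive.server2.logging.operation.log.location": "/usr/local/hive/temp/root/operation_logs",
    "hive.querylog.location": "/usr/local/hive/temp/root",
    "javax.jdo.option.ConnectionURL": (
        "jdbc:mysql://192.168.121.110:3306/hive"
        "?createDatabaseIfNotExist=true&characterEncoding=UTF-8"),
    "javax.jdo.option.ConnectionDriverName": "com.mysql.jdbc.Driver",
    "javax.jdo.option.ConnectionUserName": "root",
    "javax.jdo.option.ConnectionPassword": "123456",
    "hive.metastore.schema.verification": "false",
    "hive.metastore.schema.verification.record.version": "false",
}

def patch_hive_site(path, edits=EDITS):
    """Overwrite the <value> of every <property> whose <name> is in edits."""
    tree = ET.parse(path)
    for prop in tree.getroot().iter("property"):
        name = prop.findtext("name")
        if name in edits:
            prop.find("value").text = edits[name]
    # "&" inside values is re-escaped to "&amp;" automatically here.
    tree.write(path, encoding="utf-8", xml_declaration=True)
```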
Step 9: Copy hive-log4j2.properties.template to hive-log4j2.properties:
Command: cp hive-log4j2.properties.template hive-log4j2.properties
Step 10: Configure hive-env.sh
First copy hive-env.sh.template to hive-env.sh
Command: cp hive-env.sh.template hive-env.sh
Then edit the file as follows:
# Set HADOOP_HOME to point to a specific hadoop install directory
# HADOOP_HOME=${bin}/../../hadoop
HADOOP_HOME=/usr/local/hadoop-2.7.1   # Hadoop install directory
# Hive Configuration Directory can be controlled by:
# export HIVE_CONF_DIR=
export HIVE_CONF_DIR=/usr/local/hive/conf   # Hive configuration directory
# Folder containing extra libraries required for hive compilation/execution can be controlled by:
# export HIVE_AUX_JARS_PATH=
export HIVE_AUX_JARS_PATH=/usr/local/hive/lib   # directory of the jars Hive depends on
Step 11: Start MySQL and create the hive database
Command: mysql -uroot -p
Command: create database hive;
Set the MySQL privileges:
Command: grant all privileges on *.* to 'root'@'%' identified by 'your-password' with grant option;
Command: flush privileges;   # reload the privilege tables
Upload the MySQL connector archive to /usr/local/hive and extract it, then copy the jar into Hive's lib directory (use pwd to confirm where you are)
In the /usr/local/hive/mysql-connector-java-5.1.47 directory, run:
Command: cp mysql-connector-java-5.1.47-bin.jar /usr/local/hive/lib
After copying, go to /usr/local/hive/lib and check that the jar is there
Command: ll mysql-connector-java-5.1.47-bin.jar
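As a small pre-flight check (a sketch, not required), you can confirm the connector jar really landed in Hive's lib directory before initializing the metastore; if no jar is found, schematool will fail to load the JDBC driver. The default path below matches this guide; pass your own if it differs.

```python
# Sketch: list mysql-connector jars found in Hive's lib directory.
import glob
import os

def mysql_connector_jars(lib_dir="/usr/local/hive/lib"):
    """Return the mysql-connector jar filenames in lib_dir, sorted by name."""
    pattern = os.path.join(lib_dir, "mysql-connector-java-*.jar")
    return sorted(os.path.basename(p) for p in glob.glob(pattern))
```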
Go to Hive's bin directory and initialize the metastore:
Command: cd /usr/local/hive/bin
Initialization command: schematool -dbType mysql -initSchema
To start Hive, simply run the hive command
Command: hive
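Once Hive starts, a quick smoke test confirms the metastore connection works end to end. The sketch below writes a few HiveQL statements to a script file; "demo" is a hypothetical database name used only for this check, and the actual hive invocation is left commented out since it needs the running cluster from the steps above.

```python
# Sketch: build a HiveQL smoke-test script for the new installation.
import pathlib

SMOKE = """\
CREATE DATABASE IF NOT EXISTS demo;
SHOW DATABASES;
"""

def write_smoke_script(path="/tmp/smoke.q"):
    """Write the smoke-test statements to `path` and return the path."""
    pathlib.Path(path).write_text(SMOKE)
    return path

# Once the Hadoop services and the metastore are up, run it with:
# import subprocess
# subprocess.run(["hive", "-f", write_smoke_script()], check=True)
```

If SHOW DATABASES lists both default and demo, the metastore in MySQL is being read and written correctly.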