1. Install and configure Hive.
2. Copy the configured hive-site.xml into the $SPARK_HOME/conf directory.
3. Copy mysql-xxx-connector-xx.jar into the lib directory on each node of the Spark cluster.
4. When starting spark-shell, point --driver-class-path at the MySQL JDBC driver:
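A minimal hive-site.xml for step 2 might look like the fragment below. This is a sketch only: the hostname follows the node1.itcast.cn cluster used in step 4, while the metastore database name, username, and password are placeholder assumptions you must replace with your own values.

```xml
<configuration>
  <!-- JDBC URL of the MySQL database backing the Hive metastore (placeholder host/db) -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://node1.itcast.cn:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <!-- JDBC driver class shipped in the mysql-connector jar from step 3 -->
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <!-- placeholder credentials: replace with your metastore account -->
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
  </property>
</configuration>
```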
bin/spark-shell \
--master spark://node1.itcast.cn:7077 \
--executor-memory 1g \
--total-executor-cores 2 \
--driver-class-path /usr/local/apache-hive-0.13.1-bin/lib/mysql-connector-java-5.1.35-bin.jar
5. Call HQL through sqlContext.sql:
sqlContext.sql("select * from spark.person limit 2")
Or use org.apache.spark.sql.hive.HiveContext:
import org.apache.spark.sql.hive.HiveContext
val hiveContext = new HiveContext(sc)
hiveContext.sql("select * from spark.person").show()
Or write the query result back to a relational database with .write.mode("append").jdbc().
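The .jdbc() call above can be sketched as follows. This is a hedged example, not a verbatim recipe: the target table name person_copy, the database name spark in the JDBC URL, and the credentials are all placeholder assumptions, and it presumes a Spark version whose DataFrame API provides DataFrameWriter (1.4 or later).

```scala
import java.util.Properties

// Placeholder MySQL credentials -- replace with your own
val props = new Properties()
props.setProperty("user", "root")
props.setProperty("password", "123456")

// Run an HQL query against Hive, then append the result rows
// into a MySQL table via the JDBC driver registered at startup.
val df = hiveContext.sql("select * from spark.person")
df.write
  .mode("append")  // append to the table instead of overwriting it
  .jdbc("jdbc:mysql://node1.itcast.cn:3306/spark", "person_copy", props)
```

SaveMode "append" adds rows to an existing table; "overwrite" would drop and recreate it first.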