HiveContext fails on Windows

 

Running  val sqlContext = new HiveContext(sc)  in IntelliJ IDEA installed on Windows fails with the following error:

Caused by: java.lang.NullPointerException
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1010)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:774)
	at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:646)
	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:434)
	at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:281)
	at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:639)
	at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:567)
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
	... 17 more

Exception in thread "main" java.lang.RuntimeException: java.lang.NullPointerException
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
	at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:204)
	at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:238)
	at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:218)
	at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:208)
	at org.apache.spark.sql.hive.HiveContext.functionRegistry$lzycompute(HiveContext.scala:462)
	at org.apache.spark.sql.hive.HiveContext.functionRegistry(HiveContext.scala:461)
	at org.apache.spark.sql.UDFRegistration.<init>(UDFRegistration.scala:40)
	at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:330)
	at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:90)
	at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
	at com.ibeifeng.bigdata.spark.sql.SparkSQLHiveTests$.main(SparkSQLHiveTests.scala:38)
	at com.ibeifeng.bigdata.spark.sql.SparkSQLHiveTests.main(SparkSQLHiveTests.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)

 

Running MapReduce locally on Windows

Cause analysis: the error comes from Hadoop itself. Hadoop is written for Linux machines, so compatibility problems appear when it runs on Windows. Running MapReduce locally on Windows triggers the same kind of error.

Workaround: identify which Hadoop source file the error comes from (the failing code differs from machine to machine), find those classes in the Hadoop source, create a package with exactly the same name under the project's src/java directory, copy the source files into it, and comment out the failing code until the error is gone.

Why this works: at run time, classes compiled from the project's own source are loaded before the identically-named classes in the dependency jars, so the copied (patched) classes shadow the originals and the faulty code in the Hadoop jars is never executed.
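For the stack trace above, the NullPointerException surfaces in org.apache.hadoop.util.Shell and is reached through org.apache.hadoop.fs.RawLocalFileSystem.setPermission, so a shadowing layout might look like the sketch below. This is only an illustration: which files you actually need to copy, and which lines to comment out, depend on your Hadoop version and on where your own run fails.

```
src/java/
  org/apache/hadoop/util/Shell.java
      (copied from the Hadoop source; the Windows-incompatible section
       around runCommand(), per the trace, commented out)
  org/apache/hadoop/fs/RawLocalFileSystem.java
      (copied if the error persists; the setPermission() call path patched)
```

Because both files keep their original package names, the compiled copies in the project's output directory take precedence over the same classes inside the Hadoop jars on the classpath.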

