Big Data with Hadoop: HDFS (4): Setting Up and Testing an HDFS Client

Chapter 4: Setting Up and Testing an HDFS Client

4.1 Testing the Connection to the Virtual Machine

Step 1: Create a Maven-based Java project in IDEA


Step 2: Add the Maven dependencies

Add the HDFS coordinates to pom.xml. Some of these dependencies are not needed for this test, but later projects will use them, so it is easiest to add them all up front.

<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>3.1.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>3.1.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>3.1.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-yarn-common</artifactId>
        <version>3.1.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-yarn-client</artifactId>
        <version>3.1.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
        <version>3.1.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-core</artifactId>
        <version>3.1.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
        <version>3.1.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-common</artifactId>
        <version>3.1.2</version>
    </dependency>
    <dependency>
        <groupId>net.minidev</groupId>
        <artifactId>json-smart</artifactId>
        <version>2.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.12.1</version>
    </dependency>
    <dependency>
        <groupId>org.anarres.lzo</groupId>
        <artifactId>lzo-hadoop</artifactId>
        <version>1.0.6</version>
    </dependency>
</dependencies>

Step 3: Add a logging configuration file under the resources directory

log4j.properties

log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/spring.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n

Step 4: Write the Java test code

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.junit.Test;

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

public class HDFSTest {
    /**
     * Tests connecting to HDFS from Java.
     * @throws URISyntaxException
     * @throws IOException
     */
    @Test
    public void test() throws URISyntaxException, IOException {
        // Connection URI for the virtual machine. The hostname must be mapped
        // in the local hosts file; otherwise only an IP address will work.
        String hdfs = "hdfs://hadoop101:9000";
        // 1. Get the file system
        Configuration cfg = new Configuration();
        FileSystem fs = FileSystem.get(new URI(hdfs), cfg);

        System.out.println(cfg);
        System.out.println(fs);
        System.out.println("HDFS is up!!!");

        // 2. Release the file system handle
        fs.close();
    }
}
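For the test above to pass, the URI hdfs://hadoop101:9000 must match the fs.defaultFS value configured on the cluster, and the hostname hadoop101 must resolve on the client machine. A minimal sketch of the two pieces of configuration this assumes (the IP address 192.168.1.101 is a placeholder; substitute your VM's actual address):

```xml
<!-- core-site.xml on the cluster: the scheme, host, and port here must
     match the URI the client uses (hdfs://hadoop101:9000) -->
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop101:9000</value>
    </property>
</configuration>
```

And in the client machine's hosts file (/etc/hosts on Linux/macOS, C:\Windows\System32\drivers\etc\hosts on Windows), one line mapping the VM's IP to the hostname:

```
192.168.1.101 hadoop101
```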

Step 5: Run the test. If the Configuration and FileSystem objects print without errors, the connection succeeded.

