Text Mining Experiments with R's tm Package

The tm package provides a comprehensive text-mining framework for R. Load the tm package before doing anything else; the vignette command opens the relevant documentation. This article walks through the use of the tm package in terms of data import, corpus handling, preprocessing, metadata management, and creating term-document matrices.

    > library(tm)        # The default R installation does not include the tm package; it must be downloaded from http://www.r-project.org/. Note that many tm functions depend on other packages, so also download the win32 packages rJava, Snowball, zoo, XML, slam, Rz, RWeka, matlab from that site and unpack them into the default library directory.

    > vignette("tm")   # Opens tm.pdf, an English document describing the use of the tm package and its functions

1. Data Import:

    > txt <- system.file("texts", "txt", package = "tm")          # Stores the directory C:\Program Files\R\R-2.15.1\library\tm\texts\txt in the variable txt

    > (ovid <- Corpus(DirSource(txt), readerControl = list(language = "lat")))  # Reads the 5 files under the txt directory into the corpus ovid; language = "lat" indicates the directory txt contains Latin (lat) texts

    In addition, VectorSource is quite useful, as it can create a corpus from character vectors, e.g.:

    > docs <- c("This is a text.", "This another one.")

    > Corpus(VectorSource(docs))      # A corpus with 2 text documents
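    This makes it easy to build a corpus from data you already have in R. A minimal sketch, assuming a hypothetical file my_texts.csv with a column named text (both names are illustrative, not from the original):

    > df <- read.csv("my_texts.csv", stringsAsFactors = FALSE)   # my_texts.csv and its "text" column are assumed for illustration
    > my_corpus <- Corpus(VectorSource(df$text))                 # One document per row of the data frame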

    Finally, we create a corpus from some Reuters documents as an example for later use:

    > reut21578 <- system.file("texts", "crude", package = "tm")

    > reuters <- Corpus(DirSource(reut21578), readerControl = list(reader = readReut21578XML))  # Reads the 20 XML files under C:\Program Files\R\R-2.15.1\library\tm\texts\crude into the corpus reuters; this requires the XML package (downloaded earlier).

    > inspect(ovid[1:2])      # Prints the first two documents. Note that identical(ovid[[2]], ovid[["ovid_2.txt"]]) is TRUE, so inspect(ovid[c("ovid_1.txt", "ovid_2.txt")]) gives the same result.

2. Transformations:

   > reuters <- tm_map(reuters, as.PlainTextDocument)    # Converts the documents to plain text documents, i.e. strips the XML markup

   > reuters <- tm_map(reuters, stripWhitespace)              # Strips extra whitespace

   > reuters <- tm_map(reuters, tolower)                              # Converts the text to lower case

   > reuters <- tm_map(reuters, removeWords, stopwords("english"))      # Removes English stopwords
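    A further transformation covered in the vignette is stemming; a minimal sketch, assuming the Snowball package listed in the setup step is installed:

   > reuters <- tm_map(reuters, stemDocument)      # Reduces each word to its stem, e.g. "markets" becomes "market"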

    Note: if Chinese word segmentation is needed at this stage, bear in mind that Chinese text has no spaces between words the way English does. Fortunately Java has solved this problem; we only need to load the rJava and rmmseg4j packages in the R console. For example:

    > mmseg4j("中國人民從此站起來了")

      [1] 中國  人民  從此  站  起來
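      The segmented result can then be fed back into tm like any English text; a minimal sketch, assuming (as the output above suggests) that mmseg4j returns space-separated text:

    > seg <- mmseg4j("中國人民從此站起來了")      # Space-separated segmentation result (assumption)
    > zh_corpus <- Corpus(VectorSource(seg))       # tm can now tokenize on the inserted spaces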


3. Filters:

   > query <- "id == '237' & heading == 'INDONESIA SEEN AT CROSSROADS OVER ECONOMIC CHANGE'"     # query is simply a string specifying conditions on document metadata: here, id == 237 and the given heading

   > tm_filter(reuters, FUN = sFilter, query)       # Returns "A corpus with 1 text document": exactly one document matches, as the data shows.
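    tm_filter also accepts an arbitrary predicate as FUN; the full-text filter below is a hypothetical sketch, assuming documents in this tm version can be treated as character vectors:

   > tm_filter(reuters, FUN = function(doc) any(grepl("saudi", doc)))   # Keeps documents whose text contains "saudi" (illustrative pattern)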

4. Metadata Management

    > DublinCore(reuters[[1]], "Creator") <- "Ano Nymous"      # The first XML document carries no author by default; this statement sets the Creator attribute, and other attributes can be changed the same way.

    > meta(reuters[[1]])                                                 # Displays the metadata of the first document

     > meta(reuters, tag = "test", type = "corpus") <- "test meta"
     > meta(reuters, type = "corpus")                                    # Displays the corpus-level metadata after the change
5. Creating Term-Document Matrices
      > dtm <- DocumentTermMatrix(reuters)
      > inspect(dtm[1:5, 100:105])
      # Output:
                       A document-term matrix (5 documents, 6 terms)

                       Non-/sparse entries: 1/29
                       Sparsity           : 97%
                       Maximal term length: 10
                       Weighting          : term frequency (tf)

                           Terms
                       Docs abdul-aziz ability able abroad, abu accept
                        127          0       0    0       0   0      0
                        144          0       2    0       0   0      0
                        191          0       0    0       0   0      0
                        194          0       0    0       0   0      0
                        211          0       0    0       0   0      0
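
      DocumentTermMatrix also takes a control list, so preprocessing and weighting can be specified at matrix-construction time; the particular options below are illustrative:

      > dtm.tfidf <- DocumentTermMatrix(reuters,
      +     control = list(removePunctuation = TRUE,     # Drops punctuation such as the trailing comma in "abroad,"
      +                    weighting = weightTfIdf))     # tf-idf weighting instead of raw term frequency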

6. Further Operations on the Term-Document Matrix

      > findFreqTerms(dtm, 5)    # Find those terms that occur at least 5 times in these 20 files. Output:

               [1] "15.8" "accord" "agency" "ali"
               [5] "analysts" "arab" "arabia" "barrel."
               [9] "barrels" "bpd" "commitment" "crude"
               [13] "daily" "dlrs" "economic" "emergency"
               [17] "energy" "exchange" "exports" "feb"
               [21] "futures" "government" "gulf" "help"
               [25] "hold" "international" "january" "kuwait"
               [29] "march" "market"

      > findAssocs(dtm, "opec", 0.8)            # Find associations (i.e., correlated terms) with at least 0.8 correlation for the term "opec"

              opec  prices.     15.8
              1.00     0.81     0.80

     To examine the frequency of particular terms across multiple documents, you can build a dictionary by hand and pass it as a parameter when constructing the matrix:

     > d <- Dictionary(c("prices", "crude", "oil"))
     > inspect(DocumentTermMatrix(reuters, list(dictionary = d)))

      Because the generated term-document matrix dtm is sparse, we first reduce its dimensionality and then convert it into a standard data frame:

      > dtm2 <- removeSparseTerms(dtm, sparse = 0.95)          # The smaller the sparse value, the fewer terms are retained
      > data <- as.data.frame(inspect(dtm2))                   # Turn the term-document matrix into a data frame, ready for clustering etc. (see the next section)
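
      Note that in newer tm versions inspect() only prints its argument, so the conversion is usually written via as.matrix(); a sketch of that variant:

      > data <- as.data.frame(as.matrix(dtm2))                 # Densify the sparse matrix, then convert to a data frame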

7. From here, any tool in R can be brought to bear on the data; below we try hierarchical clustering:

       > data.scale <- scale(data)                      # Standardize each term column
       > d <- dist(data.scale, method = "euclidean")    # Euclidean distances between documents
       > fit <- hclust(d, method = "ward")              # Ward's hierarchical clustering

       > plot(fit)    # Plot the dendrogram
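
       Cluster memberships can then be read off the tree with base R's cutree; the choice of k = 5 below is purely illustrative:

       > groups <- cutree(fit, k = 5)    # Cut the dendrogram into 5 clusters (k chosen for illustration)
       > table(groups)                   # Show how many documents fall in each cluster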

