NLTK: A Natural Language Processing Library

  Natural language processing, usually abbreviated as NLP, is a branch of artificial intelligence that deals with the interaction between computers and people using natural language. The ultimate goal of NLP is to read, decipher, understand, and make sense of human language in a valuable way. Most NLP techniques rely on machine learning to derive meaning from human language.

In practice, a typical interaction between a human and a machine using natural language processing could go as follows:

  • A human talks to the machine
  • The machine captures the audio
  • The audio is converted to text
  • The text data is processed
  • The data is converted back to audio
  • The machine responds to the human by playing the audio file

Natural language processing is the driving force behind common applications such as:

  • Language translation applications, for example Google Translate

Why NLP Is Hard

Natural language processing is considered a hard problem in computer science. It is the nature of human language that makes NLP difficult. The rules that govern how information is conveyed in natural language are not easy for computers to grasp. Some of these rules are high-level and abstract, for example when someone uses a sarcastic remark to convey information. Others are low-level, such as using the character "s" to mark a plural. Fully understanding human language requires understanding both the words and how the concepts behind them connect to convey the intended message. While humans can master a language with ease, the ambiguity and imprecision of natural language make NLP hard to implement.

How Natural Language Processing Works

NLP works by applying algorithms that identify and extract the rules of natural language, so that unstructured language data can be converted into a form computers can understand. Given a piece of text, the computer uses these algorithms to extract the meaning associated with each sentence and collect the essential data from it. Sometimes the computer fails to understand a sentence's meaning well, which leads to ambiguous results.

For example, a humorous incident occurred in the 1950s when certain words were being translated between English and Russian.

Here is the biblical sentence that needed to be translated:

"The spirit is willing, but the flesh is weak."

And here is the result when the sentence was translated into Russian and then back into English:

"The vodka is good, but the meat is rotten."

A common NLP processing pipeline:

  First, tokenize the training text (with stemming or lemmatization), count word frequencies, and use the term frequency-inverse document frequency (TF-IDF) algorithm to measure each word's contribution to the sample's meaning. Based on each word's contribution, build a supervised classification model. Test samples are then handed to the model to obtain their semantic category. A minimal sketch of this pipeline is shown below.
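The same pipeline can be sketched end to end with scikit-learn. This is a minimal sketch, assuming a tiny made-up corpus and labels purely for illustration; the later sections of this article walk through each step in detail:

import sklearn.feature_extraction.text as ft
import sklearn.naive_bayes as nb

train_texts = ['the dog runs in the room',        # hypothetical training sample
               'encryption keeps messages safe']  # hypothetical training sample
train_labels = [0, 1]                             # hypothetical class labels

cv = ft.CountVectorizer()                 # tokenization + word counts (bag of words)
tt = ft.TfidfTransformer()                # word counts -> TF-IDF weights
tfidf = tt.fit_transform(cv.fit_transform(train_texts))

model = nb.MultinomialNB()                # supervised classification model
model.fit(tfidf, train_labels)            # train on the TF-IDF features

test_tfidf = tt.transform(cv.transform(['the dog is in the room']))
print(model.predict(test_tfidf))          # predicted semantic category of the test sample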

This article introduces natural language processing mainly through NLTK, the Natural Language Toolkit.

nltk.download(): download the NLTK data sets (see the sketch below)
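The data packages used by the examples in this article can be downloaded once with nltk.download(); the package names below are the ones needed later (punkt for tokenization, wordnet for lemmatization, plus the stopwords, names, and movie_reviews corpora):

import nltk

nltk.download('punkt')          # sentence / word tokenizer models
nltk.download('wordnet')        # WordNet data, needed by WordNetLemmatizer
nltk.download('stopwords')      # stop-word lists
nltk.download('names')          # male.txt / female.txt name lists
nltk.download('movie_reviews')  # labeled movie reviews for sentiment analysis

Calling nltk.download() with no arguments opens an interactive downloader instead.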

jieba: a Chinese word segmentation library (see the sketch below)
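NLTK's tokenizers are aimed at English; for Chinese text, the jieba library is commonly used instead. A minimal sketch (the sample sentence is made up, and the exact segmentation depends on jieba's dictionary):

import jieba

text = '自然語言處理很有趣'      # hypothetical sample sentence
words = jieba.lcut(text)         # returns the segmented words as a list
print(words)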

Tokenization

Tokenization-related API:

import nltk.tokenize as tk

sent_list = tk.sent_tokenize(text)          # split the sample into sentences; sent_list: list of sentences
word_list = tk.word_tokenize(text)          # split the sample into words; word_list: list of words

# split the sample into words and punctuation; punctTokenizer: tokenizer object
punctTokenizer = tk.WordPunctTokenizer()
word_list = punctTokenizer.tokenize(text)

Example:

import nltk.tokenize as tk

doc = "Are you curious about tokenization? " \
      "Let's see how it works! " \
      "We need to analyze a couple of sentences " \
      "with punctuations to see it in action."
print(doc)
tokens = tk.sent_tokenize(doc)  # sentence tokenization
for i, token in enumerate(tokens):
    print("%2d" % (i + 1), token)
# 1 Are you curious about tokenization?
# 2 Let's see how it works!
# 3 We need to analyze a couple of sentences with punctuations to see it in action.

tokens = tk.word_tokenize(doc)  # word tokenization
for i, token in enumerate(tokens):
    print("%2d" % (i + 1), token)
# 1 Are
# 2 you
# 3 curious
# 4 about
# ...
# 28 action
# 29 .
tokenizer = tk.WordPunctTokenizer()  # tokenize into words and punctuation
tokens = tokenizer.tokenize(doc)
for i, token in enumerate(tokens):
    print("%2d" % (i + 1), token)
# 1 Are
# 2 you
# 3 curious
# ...
# 27 it
# 28 in
# 29 action
# 30 .

Stemming

The form and tense of the words in a text sample have little effect on semantic analysis, so the words are reduced to their stems (stemming).

Stemming-related API:

import nltk.stem.porter as pt
import nltk.stem.lancaster as lc
import nltk.stem.snowball as sb

stemmer = pt.PorterStemmer()    # Porter stemmer, relatively lenient
stemmer = lc.LancasterStemmer() # Lancaster stemmer, relatively strict

# Snowball stemmer, somewhere in between
stemmer = sb.SnowballStemmer('english')
r = stemmer.stem('playing')     # extract the stem of the word 'playing'

Example:

import nltk.stem.porter as pt
import nltk.stem.lancaster as lc
import nltk.stem.snowball as sb

words = ['table', 'probably', 'wolves', 'playing', 'is',
         'dog', 'the', 'beaches', 'grounded', 'dreamt', 'envision']
pt_stemmer = pt.PorterStemmer()         # Porter stemmer, relatively lenient
lc_stemmer = lc.LancasterStemmer()      # Lancaster stemmer, relatively strict
sb_stemmer = sb.SnowballStemmer('english')  # Snowball stemmer, somewhere in between
for word in words:
    pt_stem = pt_stemmer.stem(word)
    lc_stem = lc_stemmer.stem(word)
    sb_stem = sb_stemmer.stem(word)
    print('%8s %8s %8s %8s' % (word, pt_stem, lc_stem, sb_stem))
#    table     tabl     tabl     tabl
# probably  probabl     prob  probabl
#   wolves     wolv     wolv     wolv
#  playing     play     play     play
#       is       is       is       is
#      dog      dog      dog      dog
#      the      the      the      the
#  beaches    beach    beach    beach
# grounded   ground   ground   ground
#   dreamt   dreamt   dreamt   dreamt
# envision    envis    envid    envis

Lemmatization

  Lemmatization serves a similar purpose to stemming, but its output is better suited for further manual processing: some stems are not real words, which makes them harder to read. Lemmatization restores plural nouns to their singular form and verb participles to their base form.

Lemmatization-related API:

import nltk.stem as ns
# get a lemmatizer object
lemmatizer = ns.WordNetLemmatizer()

n_lemma = lemmatizer.lemmatize(word, pos='n')   # lemmatize word as a noun
v_lemma = lemmatizer.lemmatize(word, pos='v')   # lemmatize word as a verb

Example:

import nltk.stem as ns
words = ['table', 'probably', 'wolves', 'playing',
         'is', 'dog', 'the', 'beaches', 'grounded',
         'dreamt', 'envision']
lemmatizer = ns.WordNetLemmatizer()
for word in words:
    n_lemma = lemmatizer.lemmatize(word, pos='n')   # lemmatize as a noun
    v_lemma = lemmatizer.lemmatize(word, pos='v')   # lemmatize as a verb
    print('%8s %8s %8s' % (word, n_lemma, v_lemma))
#    table    table    table
# probably probably probably
#   wolves     wolf   wolves
#  playing  playing     play
#       is       is       be
#      dog      dog      dog
#      the      the      the
#  beaches    beach    beach
# grounded grounded   ground
#   dreamt   dreamt    dream
# envision envision envision

Bag-of-Words Model

  The meaning of a sentence depends to a large extent on how often certain words appear. The bag-of-words model treats each sentence as a sample; the mathematical model built from feature names and feature values is called the "bag-of-words model".

  • Feature names: all the words that may appear in a sentence
  • Feature values: the number of times each word appears in the sentence

The brown dog is running. The black dog is in the black room. Running in the room is forbidden.

1 The brown dog is running

2 The black dog is in the black room

3 Running in the room is forbidden

            the  brown  dog  is  running  black  in  room  forbidden
Sentence 1    1      1    1   1        1      0   0     0          0
Sentence 2    2      0    1   1        0      2   1     1          0
Sentence 3    1      0    0   1        1      0   1     1          1


Bag-of-words related API:

import sklearn.feature_extraction.text as ft

cv = ft.CountVectorizer()           # build a bag-of-words model

bow = cv.fit_transform(sentences)   # fit the model
print(bow.toarray())                # word counts
words = cv.get_feature_names()      # all feature names

Example:

import nltk.tokenize as tk
import sklearn.feature_extraction.text as ft

doc = 'The brown dog is running. ' \
      'The black dog is in the black room. ' \
      'Running in the room is forbidden.'

# split doc into sentences
sents = tk.sent_tokenize(doc)

cv = ft.CountVectorizer()           # build a bag-of-words model
bow = cv.fit_transform(sents)       # fit the bag-of-words model
print(cv.get_feature_names())       # all feature names
# ['black', 'brown', 'dog', 'forbidden', 'in', 'is', 'room', 'running', 'the']
print(bow.toarray())
# [[0 1 1 0 0 1 0 1 1]
#  [2 0 1 0 1 1 1 0 2]
#  [0 0 0 1 1 1 1 1 1]]

Term Frequency (TF)

$$\text{term frequency} = \frac{\text{number of times the word appears in the sentence}}{\text{total number of words in the sentence}}$$

  Term frequency (TF): how often a word appears in a sentence. Compared with raw word counts, term frequency is a more objective measure of a word's contribution to the sentence's meaning: the higher the term frequency, the greater the contribution. Normalizing the bag-of-words matrix yields the term frequencies.

Example: normalize the bag-of-words matrix

import nltk.tokenize as tk
import sklearn.feature_extraction.text as ft
import sklearn.preprocessing as sp
doc = 'The brown dog is running. The black dog is in the black room. ' \
      'Running in the room is forbidden.'

sentences = tk.sent_tokenize(doc)       # split into sentences

cv = ft.CountVectorizer()
bow = cv.fit_transform(sentences)
print(bow.toarray())        # word counts
words = cv.get_feature_names()
print(words)                # feature names
tf = sp.normalize(bow, norm='l1')   # L1-normalize each row of the bag-of-words matrix
print(tf.toarray())                 # term frequencies

# [[0.    0.2   0.2    0.         0.         0.2        0.         0.2        0.2  ]
#  [0.25  0.    0.125  0.         0.125      0.125      0.125      0.         0.25 ]
#  [0.    0.    0.     0.16666667 0.16666667 0.16666667 0.16666667 0.16666667 0.16666667]]

Document Frequency (DF)

$$\text{document frequency} = \frac{\text{number of documents containing the word}}{\text{total number of documents}}$$

  The lower the DF, the more the word contributes to the document's meaning.

Inverse Document Frequency (IDF)

$$\text{inverse document frequency} = \frac{\text{total number of documents}}{\text{number of documents containing the word} + 1}$$

  The higher the IDF, the more the word contributes to the meaning. (In practice a logarithm is usually applied to this ratio; scikit-learn's TfidfTransformer uses a smoothed logarithmic variant.)
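As a quick check, the DF and the (un-logged) IDF defined above can be computed directly from the bag-of-words matrix; this sketch reuses bow and words from the term-frequency example:

counts = bow.toarray()                      # word counts per sentence, from the TF example above
doc_count = (counts > 0).sum(axis=0)        # number of sentences containing each word
df = doc_count / counts.shape[0]            # document frequency
idf = counts.shape[0] / (doc_count + 1)     # inverse document frequency as defined above (no logarithm)
for word, d, i in zip(words, df, idf):
    print('%10s  DF=%.2f  IDF=%.2f' % (word, d, i))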

Term Frequency-Inverse Document Frequency (TF-IDF)

Multiplying each element of the term-frequency matrix by the corresponding word's inverse document frequency gives the TF-IDF matrix. The larger the value, the more that word contributes to the sample's meaning, and a learning model is built from these contributions.

API for obtaining the term frequency-inverse document frequency (TF-IDF) matrix:

# build the bag-of-words model
cv = ft.CountVectorizer()
bow = cv.fit_transform(sentences).toarray()
# get a TF-IDF transformer
tt = ft.TfidfTransformer()
tfidf = tt.fit_transform(bow).toarray()

Example: obtain the TF-IDF matrix:

import nltk.tokenize as tk
import sklearn.feature_extraction.text as ft
import numpy as np

doc = 'The brown dog is running. ' \
      'The black dog is in the black room. ' \
      'Running in the room is forbidden.'

# split doc into sentences
sents = tk.sent_tokenize(doc)

# build the bag-of-words model
cv = ft.CountVectorizer()
bow = cv.fit_transform(sents)

# TF-IDF
tt = ft.TfidfTransformer()          # get a TF-IDF transformer
tfidf = tt.fit_transform(bow)       # fit and transform
print(np.round(tfidf.toarray(), 2))       # rounded to two decimal places
# [[0.   0.59 0.45 0.   0.   0.35 0.   0.45 0.35]
#  [0.73 0.   0.28 0.   0.28 0.22 0.28 0.   0.43]
#  [0.   0.   0.   0.54 0.41 0.32 0.41 0.41 0.32]]
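scikit-learn also offers ft.TfidfVectorizer, which combines CountVectorizer and TfidfTransformer in a single object; with default parameters it produces the same matrix as the two-step version above (a sketch reusing sents from the example):

tv = ft.TfidfVectorizer()               # bag-of-words + TF-IDF in one step
tfidf2 = tv.fit_transform(sents)        # sents: the sentence list from the example above
print(np.round(tfidf2.toarray(), 2))    # same values as the TF-IDF matrix printed above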

Text Classification (Topic Identification)

Train a topic identification model on the given text data set, then check its accuracy on a custom test set.

import numpy as np
import sklearn.datasets as sd
import sklearn.feature_extraction.text as ft
import sklearn.naive_bayes as nb

train = sd.load_files('../machine_learning_date/20news',
                      encoding='latin1', shuffle=True,
                      random_state=7)
# train.data: 2968 samples, each one an email document
print(np.array(train.data).shape)       # (2968,)

# train.target: 2968 samples, each the category index of the corresponding document
print(np.array(train.target).shape)     # (2968,)
print(train.target_names)
# ['misc.forsale', 'rec.motorcycles', 'rec.sport.baseball', 'sci.crypt', 'sci.space']

cv = ft.CountVectorizer()           # bag-of-words model
tt = ft.TfidfTransformer()          # TF-IDF transformer

bow = cv.fit_transform(train.data)  # fit the bag-of-words model
tfidf = tt.fit_transform(bow)       # compute the TF-IDF matrix
print(tfidf.shape)              # (2968, 40605)

model = nb.MultinomialNB()      # create a multinomial naive Bayes model
model.fit(tfidf, train.target)  # train the naive Bayes model

# test with a custom test set
test_data = [
    'The curveballs of right handed pitchers tend to curve to the left',
    'Caesar cipher is an ancient form of encryption',
    'This two-wheeler is really good on slippery roads']
# the test data must be transformed exactly as the training data was
bow = cv.transform(test_data)
tfidf = tt.transform(bow)
pred_y = model.predict(tfidf)

for sent, index in zip(test_data, pred_y):
    print(sent, '->', train.target_names[index])
# The curveballs of right handed pitchers tend to curve to the left -> rec.sport.baseball
# Caesar cipher is an ancient form of encryption -> sci.crypt
# This two-wheeler is really good on slippery roads -> rec.motorcycles

Gender Identification

Use the classifier provided by NLTK to train on the English male and female names in its corpus, then verify the predicted gender of new names.

NLTK corpus and classification API:

import nltk.corpus as nc
import nltk.classify as cf

# read male.txt from the names corpus and split it into a list of names
male_names = nc.names.words('male.txt')

'''
train_data is no longer a sample matrix; nltk expects data in this format:
[ ({'age': 15, 'score1': 95, 'score2': 95}, 'good'),
  ({'age': 15, 'score1': 45, 'score2': 55}, 'bad') ]
'''
# train a naive Bayes classifier on the training data
model = cf.NaiveBayesClassifier.train(train_data)
# compute the classifier's accuracy on the test data (same format as the training data)
ac = cf.accuracy(model, test_data)
# classify a single concrete sample
feature = {'age': 15, 'score1': 95, 'score2': 95}
gender = model.classify(feature)

Example:

import random
import nltk.corpus as nc
import nltk.classify as cf
male_names = nc.names.words('male.txt')
female_names = nc.names.words('female.txt')

data = []
for male_name in male_names:
    feature = {'feature': male_name[-2:].lower()}   # use the last two letters of the name
    data.append((feature, 'male'))
for female_name in female_names:
    feature = {'feature': female_name[-2:].lower()}
    data.append((feature, 'female'))
random.seed(7)
random.shuffle(data)
train_data = data[:int(len(data) / 2)]      # first half of the data as training data
test_data = data[int(len(data) / 2):]       # second half of the data as test data
model = cf.NaiveBayesClassifier.train(train_data)       # naive Bayes classifier
ac = cf.accuracy(model, test_data)

names, genders = ['Leonardo', 'Amy', 'Sam', 'Tom', 'Katherine', 'Taylor', 'Susanne'], []
for name in names:
    feature = {'feature': name[-2:].lower()}
    gender = model.classify(feature)
    genders.append(gender)
for name, gender in zip(names, genders):
    print(name, '->', gender)
# Leonardo -> male
# Amy -> female
# Sam -> male
# Tom -> male
# Katherine -> female
# Taylor -> male
# Susanne -> female

The NLTK Classifier

  NLTK provides a naive Bayes classifier that is convenient for natural-language classification problems. It works directly on feature dictionaries (for example word presence or word counts), so there is no need to assemble a bag-of-words or TF-IDF matrix by hand before training a model and predicting categories. Usage is as follows:

import nltk.classify as cf
import nltk.classify.util as cu
'''
train_data is no longer a sample matrix; nltk expects data in this format:
[ ({'How': 1, 'are': 1, 'you': 1}, 'ask'),
  ({'fine': 1, 'Thanks': 2}, 'answer') ]
'''
# train a naive Bayes classifier on the training data
model = cf.NaiveBayesClassifier.train(train_data)
ac = cu.accuracy(model, test_data)
print(ac)
pred = model.classify(test_data[0][0])   # classify a single sample (one feature dict)

Sentiment Analysis

Analyze the movie_reviews documents in the corpus: train on the positive and negative reviews, then perform sentiment analysis.

import nltk.corpus as nc
import nltk.classify as cf
import nltk.classify.util as cu

# collect all positive samples
# pdata: [({word: True}, 'POSITIVE'), (...), ...]
pdata = []
# file ids of every file in the pos folder
fileids = nc.movie_reviews.fileids('pos')
# print(len(fileids))
# collect the words of every positive review and store them in pdata
for fileid in fileids:
    sample = {}
    # words: the current document split into words
    words = nc.movie_reviews.words(fileid)
    for word in words:
        sample[word] = True
    pdata.append((sample, 'POSITIVE'))
# collect all negative samples and store them in ndata
ndata = []
fileids = nc.movie_reviews.fileids('neg')
for fileid in fileids:
    sample = {}
    words = nc.movie_reviews.words(fileid)
    for word in words:
        sample[word] = True
    ndata.append((sample, 'NEGATIVE'))

# split into training and test sets (80% of the data for training)
pnumb, nnumb = int(0.8 * len(pdata)), int(0.8 * len(ndata))
train_data = pdata[:pnumb] + ndata[:nnumb]
test_data = pdata[pnumb:] + ndata[nnumb:]
# train a naive Bayes classifier on the training data
model = cf.NaiveBayesClassifier.train(train_data)
ac = cu.accuracy(model, test_data)
print(ac)

# simulate real-world input
reviews = [
    'It is an amazing movie.',
    'This is a dull movie. I would never recommend it to anyone.',
    'The cinematography is pretty great in this movie.',
    'The direction was terrible and the story was all over the place.']
for review in reviews:
    sample = {}
    words = review.split()
    for word in words:
        sample[word] = True
    pcls = model.classify(sample)
    print(review, '->', pcls)
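NLTK's naive Bayes classifier can also report which features carry the most weight in its decisions, which is a quick way to sanity-check the sentiment model; a short addition to the code above:

# print the 10 words whose presence most strongly separates POSITIVE from NEGATIVE reviews
model.show_most_informative_features(10)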

Topic Extraction

After tokenization, word cleanup, and stemming, the core topic words of a piece of text can be extracted and the text's topic determined. This is unsupervised learning. The gensim module provides the usual tools for topic extraction; here an LDA model is built on bag-of-words counts.

Topic extraction API:

import gensim.models.ldamodel as gm
import gensim.corpora as gc

line_tokens = ['hello', 'world', ...]   # tokens of one document
lines_tokens = [line_tokens, ...]       # one token list per document
# store every word appearing in lines_tokens in a gensim dictionary object, which assigns each word an integer id
dic = gc.Dictionary(lines_tokens)
# build a bag-of-words vector for one document via the dictionary
bow = dic.doc2bow(line_tokens)

# build the LDA model
# bow: bag-of-words corpus (one vector per document)
# num_topics: number of topics
# id2word: dictionary
# passes: number of training passes over the corpus
model = gm.LdaModel(bow, num_topics=n_topics, id2word=dic, passes=25)
# print the 4 topic words that contribute most to each topic
topics = model.print_topics(num_topics=n_topics, num_words=4)

Example:

import nltk.tokenize as tk
import nltk.corpus as nc
import nltk.stem.snowball as sb
import gensim.models.ldamodel as gm
import gensim.corpora as gc
doc = []
with open('../machine_learning_date/topic.txt', 'r') as f:
    for line in f.readlines():
        doc.append(line[:-1])
tokenizer = tk.WordPunctTokenizer() 
stopwords = nc.stopwords.words('english')
signs = [',', '.', '!']
stemmer = sb.SnowballStemmer('english')
lines_tokens = []
for line in doc:
    tokens = tokenizer.tokenize(line.lower())
    line_tokens = []
    for token in tokens:
        if token not in stopwords and token not in signs:
            token = stemmer.stem(token)
            line_tokens.append(token)
    lines_tokens.append(line_tokens)
# store every word appearing in lines_tokens in the gensim dictionary, assigning each word an integer id
dic = gc.Dictionary(lines_tokens)
# iterate over each line and build the list of bag-of-words vectors
bow = []
for line_tokens in lines_tokens:
    row = dic.doc2bow(line_tokens)
    bow.append(row)
n_topics = 2
# build the LDA model from the bag-of-words corpus, number of topics, dictionary, and number of training passes
model = gm.LdaModel(bow, num_topics=n_topics, id2word=dic, passes=25)
# print the 4 topic words that contribute most to each topic
topics = model.print_topics(num_topics=n_topics, num_words=4)
for label, words in topics:
    print(label, '->', words)
# 0 -> 0.022*"cryptographi" + 0.022*"use" + 0.022*"need" + 0.013*"cryptograph"
# 1 -> 0.046*"spaghetti" + 0.021*"made" + 0.021*"italian" + 0.015*"19th"
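Once trained, the LDA model can also estimate the topic mixture of a new piece of text. A minimal sketch reusing the tokenizer, stemmer, stop-word list and dictionary built above (the test sentence is made up for illustration):

new_doc = 'Spaghetti is a classic Italian dish.'    # hypothetical test sentence
new_tokens = [stemmer.stem(t) for t in tokenizer.tokenize(new_doc.lower())
              if t not in stopwords and t not in signs]
new_bow = dic.doc2bow(new_tokens)
print(model.get_document_topics(new_bow))           # list of (topic id, probability) pairs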

 
