Continuing from the previous post.
3. Classifying a single sample
'''
function: classify the input sample by voting among its k nearest neighbors
input:
1. the input feature vector
2. the feature matrix
3. the label list
4. the value of k
return: the result label
'''
def ClassifySampleByKNN(featureVectorIn, featureMatrix, labelList, kValue):
    # calculate the distance between the input feature vector and the feature matrix
    disValArray = CalcEucDistance(featureVectorIn, featureMatrix)
    # sort and return the indices
    theIndexListOfSortedDist = disValArray.argsort()
    # consider the first k indices and vote for the label
    labelAndCount = {}
    for i in range(kValue):
        theLabelIndex = theIndexListOfSortedDist[i]
        theLabel = labelList[theLabelIndex]
        labelAndCount[theLabel] = labelAndCount.get(theLabel, 0) + 1
    # sort the (label, count) pairs by count, descending (items() replaces Python 2's iteritems())
    sortedLabelAndCount = sorted(labelAndCount.items(), key=lambda x: x[1], reverse=True)
    return sortedLabelAndCount[0][0]
The basic idea: first compute the Euclidean distance between the input sample and every sample in the training set, sort by distance, take the k samples with the smallest distances, and let their labels vote. The label with the most votes is the prediction for the input sample.
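The same distance-sort-vote pipeline can be sketched compactly with the standard library's collections.Counter in place of the hand-rolled counting dictionary. This is a minimal illustration on made-up toy data, not the post's dating dataset:

```python
import numpy as np
from collections import Counter

# toy training set: 4 samples with 2 features each (hypothetical data)
featureMatrix = np.array([[1.0, 1.1],
                          [1.0, 1.0],
                          [0.0, 0.0],
                          [0.0, 0.1]])
labelList = ['A', 'A', 'B', 'B']

def classify(vec, featureMatrix, labelList, k):
    # Euclidean distance from the input vector to every training row
    dist = np.sqrt(((featureMatrix - vec) ** 2).sum(axis=1))
    # indices of the k nearest neighbors
    nearest = dist.argsort()[:k]
    # majority vote among their labels
    votes = Counter(labelList[i] for i in nearest)
    return votes.most_common(1)[0][0]

print(classify(np.array([0.2, 0.1]), featureMatrix, labelList, 3))  # 'B'
```

Counter.most_common(1) plays the same role as sorting the dictionary by value and taking the first entry.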
One line worth highlighting:
# sort and return the indices
theIndexListOfSortedDist = disValArray.argsort()
disValArray is a one-dimensional numpy array that holds only the Euclidean distance values. argsort sorts those values and returns the original indices in sorted order, which is very convenient. The other notable line is the call to sorted, which orders the dictionary by value using a functional-style lambda expression; the operator module can achieve the same thing.
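A minimal illustration of both idioms, using toy numbers rather than the actual distance values:

```python
import numpy as np
import operator

dist = np.array([3.2, 0.5, 1.7])
# argsort returns the indices that would sort the array, smallest first
print(dist.argsort())  # [1 2 0]

labelAndCount = {'A': 2, 'B': 5, 'C': 1}
# sort the dictionary items by value, descending: lambda version
print(sorted(labelAndCount.items(), key=lambda x: x[1], reverse=True))
# operator.itemgetter(1) is equivalent to lambda x: x[1]
print(sorted(labelAndCount.items(), key=operator.itemgetter(1), reverse=True))
```

Both sorted calls produce [('B', 5), ('A', 2), ('C', 1)]; itemgetter is marginally faster and arguably more readable than the lambda.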
4. Classifying the test sample file and computing the error rate
'''
function: classify the samples in the test file with the KNN algorithm
input:
1. the name of the training sample file
2. the name of the testing sample file
3. the k value for KNN
4. the name of the log file
'''
def ClassifySampleFileByKNN(sampleFileNameForTrain, sampleFileNameForTest, kValue, logFileName):
    logFile = open(logFileName, 'w')
    # load the feature matrices and normalize them
    feaMatTrain, labelListTrain = LoadFeatureMatrixAndLabels(sampleFileNameForTrain)
    norFeaMatTrain = AutoNormalizeFeatureMatrix(feaMatTrain)
    feaMatTest, labelListTest = LoadFeatureMatrixAndLabels(sampleFileNameForTest)
    norFeaMatTest = AutoNormalizeFeatureMatrix(feaMatTest)
    # classify each test sample and write the result into the log
    errorNumber = 0.0
    testSampleNum = norFeaMatTest.shape[0]
    for i in range(testSampleNum):
        label = ClassifySampleByKNN(norFeaMatTest[i,:], norFeaMatTrain, labelListTrain, kValue)
        if label == labelListTest[i]:
            logFile.write("%d:right\n" % i)
        else:
            logFile.write("%d:wrong\n" % i)
            errorNumber += 1
    errorRate = errorNumber / testSampleNum
    logFile.write("the error rate: %f" % errorRate)
    logFile.close()
    return
That is a fair amount of code, but the logic is simple, so there is not much to say. As an aside, I am not sure what the usual naming convention in Python is. I found that fully spelled-out variable names get too long and look ugly on my MacBook Pro's screen, so I kept the abbreviated C/C++-style naming here.
5. The entry-point call
Similar to the main function in C/C++: when you run the kNN.py script, this block executes first:
if __name__ == '__main__':
    print("You are running KNN.py")
    ClassifySampleFileByKNN('datingSetOne.txt', 'datingSetTwo.txt', 3, 'log.txt')
I chose k = 3 for kNN.
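Rather than fixing k = 3 up front, one could compare the error rate for a few candidate k values. The sketch below does this on made-up toy data with a hypothetical helper that returns the error rate instead of only writing it to a log; the names and data are illustrative, not part of the post's code:

```python
import numpy as np
from collections import Counter

def knn_error_rate(trainX, trainY, testX, testY, k):
    """Fraction of test samples misclassified by k-NN majority vote."""
    errors = 0
    for x, y in zip(testX, testY):
        # Euclidean distance from the test point to every training row
        dist = np.sqrt(((trainX - x) ** 2).sum(axis=1))
        votes = Counter(trainY[i] for i in dist.argsort()[:k])
        if votes.most_common(1)[0][0] != y:
            errors += 1
    return errors / len(testY)

# toy data (hypothetical, standing in for the dating set)
trainX = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
trainY = ['B', 'B', 'A', 'A']
testX  = np.array([[0.05, 0.05], [0.95, 1.0]])
testY  = ['B', 'A']

for k in (1, 3):
    print(k, knn_error_rate(trainX, trainY, testX, testY, k))
```

Odd values of k are the usual choice for two-class problems, since they avoid ties in the vote.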
To be continued.
If you repost this, please cite the source: http://blog.csdn.net/xceman1997/article/details/44994215