Support Vector Machine (SVM) Notes


1. Overview

SVM stands for Support Vector Machine, a supervised classification algorithm in machine learning, generally used for binary classification. A linearly separable binary problem can be solved directly; a non-linearly separable one can be made linearly separable by using a kernel function to map the data from the low-dimensional input space into a higher-dimensional space. With suitable transformations, SVM can also handle multi-class problems. Compared with traditional classifiers such as logistic regression, k-nearest neighbors, decision trees, the perceptron, and Gaussian discriminant analysis (GDA), SVM has its own distinctive advantages; compared with the heavy training cost of neural networks, it achieves good results with far less computation.

2. Problem Statement

  • Consider a linearly separable binary classification problem

    • There are m training samples; $x^{(i)}$ is a feature vector and $y^{(i)}$ is the target variable:

      $\{(x^{(i)}, y^{(i)})\},\ x^{(i)} \in \mathbb{R}^n,\ y^{(i)} \in \{-1, +1\},\ i = 1, 2, \dots, m$
      Decision function: $h_{w,b}(x) = g(w^T x + b)$, where $g(z) = 1$ if $z \ge 0$ and $g(z) = -1$ if $z < 0$
      [Figure: linearly separable samples in the plane; the straight line is the hyperplane $w^T x + b = 0$]

    • First, define some notation

      • functional margin

        $\hat{r} = \min\{\hat{r}^{(i)}\},\ i = 1, 2, \dots, m; \quad \hat{r}^{(i)} = y^{(i)}(w^T x^{(i)} + b)$

      • geometric margin

        $r = \min\{r^{(i)}\},\ i = 1, 2, \dots, m; \quad r^{(i)} = \dfrac{y^{(i)}(w^T x^{(i)} + b)}{\|w\|}$

      • Interpretation:

        • Functional margin: since $y^{(i)}$ only takes the values $-1$ and $1$, $w^T x^{(i)} + b \gg 0$ with $y = 1$ (or $\ll 0$ with $y = -1$) means the point lies far from the hyperplane $w^T x + b = 0$ on the correct side. Note that doubling $w$ and $b$ doubles the functional margin, while the geometric margin is unchanged (see the sketch after this list).
    • Objective: maximize the geometric margin, i.e.

      $\max\{r\}$
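    As a quick numeric illustration of the two margin definitions (a minimal sketch; `w`, `b`, `X`, `y` below are made-up data, not from the notes):

        import numpy as np

        w = np.array([1.0, -1.0])   # hypothetical weight vector
        b = 0.5                     # hypothetical intercept
        X = np.array([[2.0, 0.0], [-1.0, 1.0], [0.0, 3.0]])
        y = np.array([1, -1, -1])

        functional = y * (X @ w + b)                # r_hat^(i) = y^(i) (w^T x^(i) + b)
        geometric = functional / np.linalg.norm(w)  # r^(i) = r_hat^(i) / ||w||
        print(functional.min(), geometric.min())    # the margins are the minima over i

        # doubling (w, b) doubles the functional margin but not the geometric one
        doubled = y * (X @ (2 * w) + 2 * b)
        print(doubled.min(), doubled.min() / np.linalg.norm(2 * w))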

3. Transforming the Problem

  • Transform step by step:

    • $\max\{r\}$
    • $\max\left\{\min\left\{\dfrac{y^{(i)}(w^T x^{(i)} + b)}{\|w\|}\right\}\right\},\ i = 1, 2, \dots, m$
    • $\max\{r\}\quad \text{s.t.}\ \dfrac{y^{(i)}(w^T x^{(i)} + b)}{\|w\|} \ge r$
    • $\max\left\{\dfrac{\hat{r}}{\|w\|}\right\}\quad \text{s.t.}\ y^{(i)}(w^T x^{(i)} + b) \ge \hat{r}$
    • Note that rescaling the functional margin does not affect the solution of the optimization problem.

    Let $\hat{r} = 1$.
    The problem becomes:
    $\max\left\{\dfrac{1}{\|w\|}\right\}\quad \text{s.t.}\ y^{(i)}(w^T x^{(i)} + b) \ge 1$
    It is finally turned into an optimization problem whose objective function is convex:
    $\min\left\{\dfrac{1}{2}\|w\|^2\right\}\quad \text{s.t.}\ y^{(i)}(w^T x^{(i)} + b) \ge 1 \qquad (1)$

4. Solving the Problem

(1) It can be solved with standard QP (quadratic programming) methods; MATLAB and LINGO both provide suitable toolboxes (a sketch using a Python QP solver follows).
(2) Since this article is about SVM, a different solution method will of course be used; moreover, when the training set is large, the SVM-specific method is more efficient than generic QP solvers.
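As an illustration of route (1), here is a minimal sketch that feeds problem (1) to a generic QP solver, assuming the third-party cvxopt package is installed; the variable is $z = (w, b)$ and the helper name `primal_svm` is made up:

    import numpy as np
    from cvxopt import matrix, solvers

    def primal_svm(X, y):
        # solve min (1/2)||w||^2  s.t.  y_i (w^T x_i + b) >= 1 as a generic QP
        m, n = X.shape
        P = np.zeros((n + 1, n + 1))
        P[:n, :n] = np.eye(n)            # (1/2) z^T P z = (1/2)||w||^2 (b unpenalized)
        q = np.zeros(n + 1)
        # rewrite y_i (w^T x_i + b) >= 1 as G z <= h with G = -y_i [x_i, 1], h = -1
        G = -y[:, None] * np.hstack([X, np.ones((m, 1))])
        h = -np.ones(m)
        sol = solvers.qp(matrix(P), matrix(q), matrix(G), matrix(h))
        z = np.array(sol["x"]).ravel()
        return z[:n], z[n]               # w, b

Note that problem (1) assumes the data is linearly separable; for data like that in section 7, the soft-margin version of section 5 would be needed instead.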

  • Generalized Lagrange multipliers

    For the optimization problem (1) obtained in section 3, form the Lagrangian:
    $L(w, b, \alpha) = \dfrac{1}{2}\|w\|^2 - \sum_{i=1}^{m} \alpha^{(i)}\left[y^{(i)}(w^T x^{(i)} + b) - 1\right], \quad \alpha^{(i)} \ge 0$

    • Under the constraints $y^{(i)}(w^T x^{(i)} + b) \ge 1$, the bracketed terms are non-negative, so the maximum over $\alpha \ge 0$ is attained at $\alpha = 0$:
      $\max_{\alpha}\{L(w, b, \alpha)\} = \dfrac{1}{2}\|w\|^2 = f(w)$
  • The optimization problem becomes:

    $\min_{w,b}\left\{\max_{\alpha}\{L(w, b, \alpha)\}\right\}\quad \text{s.t.}\ y^{(i)}(w^T x^{(i)} + b) \ge 1,\ \alpha^{(i)} \ge 0$

  • Under the KKT conditions (the dual optimization problem):

    • $\min_{w,b}\left\{\max_{\alpha}\{L(w, b, \alpha)\}\right\} = \max_{\alpha}\left\{\min_{w,b}\{L(w, b, \alpha)\}\right\}$

      The dual problem $\max\{\min\{f(w, \alpha)\}\}$ is usually easier to solve than the primal problem $\min\{\max\{f(w, \alpha)\}\}$, especially when the number of training samples is large. The condition $\bar{\alpha}^{(i)} g_i = 0$ below is known as complementary slackness.

    • $\nabla_{w,b}\, L(\bar{w}, \bar{b}, \bar{\alpha}) = 0$

      where $\bar{w}, \bar{b}$ are the primal optimum and $\bar{\alpha}$ is the dual optimum

    • $\bar{\alpha}^{(i)} g_i(\bar{w}, \bar{b}) = 0$

      When $y^{(i)}(w^T x^{(i)} + b) = 1$, usually $\alpha^{(i)} \ne 0$; these points are called Support Vectors. When $y^{(i)}(w^T x^{(i)} + b) > 1$, $\alpha^{(i)} = 0$; usually most of the $\alpha^{(i)}$ are zero, which reduces the computation.

  • Solving $\min_{w,b}\{L(w, b, \alpha)\}$

    Setting the partial derivatives to zero, $\nabla_w L = w - \sum_{i=1}^{m} \alpha^{(i)} y^{(i)} x^{(i)} = 0$ and $\dfrac{\partial L}{\partial b} = -\sum_{i=1}^{m} \alpha^{(i)} y^{(i)} = 0$, gives
    $w = \sum_{i=1}^{m} \alpha^{(i)} y^{(i)} x^{(i)}, \qquad \sum_{i=1}^{m} \alpha^{(i)} y^{(i)} = 0$

  • Substituting back into the original expression:

    $\max_{\alpha}\left\{\sum_{i=1}^{m} \alpha^{(i)} - \dfrac{1}{2}\sum_{i,j=1}^{m} y^{(i)} y^{(j)} \alpha^{(i)} \alpha^{(j)} \langle x^{(i)}, x^{(j)} \rangle\right\}\quad \text{s.t.}\ \alpha^{(i)} \ge 0,\ \sum_{i=1}^{m} \alpha^{(i)} y^{(i)} = 0$

    • Solving for $\alpha$ then yields $w$ and $b$.
    • The decision function becomes $w^T x + b = \sum_{i=1}^{m} \alpha^{(i)} y^{(i)} \langle x^{(i)}, x \rangle + b$.
    • $K(x, z) = \langle \phi(x), \phi(z) \rangle$ is called a kernel function; it cuts down the cost of computing in the high-dimensional feature space. Evaluating the kernel is usually much cheaper than computing the mapped vectors $\phi(x), \phi(z)$ explicitly, and even when $\phi$ maps into an infinite-dimensional space the kernel can still be evaluated. Common kernels (sketched in code below):
      • Gaussian kernel: $K(x, z) = \exp\left(-\dfrac{\|x - z\|^2}{2\sigma^2}\right)$
      • Polynomial kernel: $K(x, z) = (x^T z)^a$
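    A minimal sketch of the two kernels and the kernelized decision function above (the function names are made up for illustration):

        import numpy as np

        def gaussian_kernel(x, z, sigma=1.0):
            # K(x, z) = exp(-||x - z||^2 / (2 sigma^2))
            return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

        def polynomial_kernel(x, z, a=2):
            # K(x, z) = (x^T z)^a
            return np.dot(x, z) ** a

        def decision(x, X, y, alpha, b, kernel=gaussian_kernel):
            # w^T x + b = sum_i alpha_i y_i K(x_i, x) + b; only the support
            # vectors (alpha_i > 0) actually contribute to the sum
            return sum(alpha[i] * y[i] * kernel(X[i], x)
                       for i in range(len(y)) if alpha[i] > 0) + b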

5. Refining the Problem

  • Section 4 derived the problem of finding the maximizing $\alpha$, but a difficulty remains.
    [Figure: a training set with points falling on the wrong side of the hyperplane]

    When the training set has points distributed across both sides of the hyperplane, as in the right-hand figure, the result is poor. We can therefore relax the condition $\hat{r} = 1$ with slack variables, allowing a few points to have margin less than 1, or even to be classified on the wrong side.

  • We modify the constraints and the objective function:

    $\min\left\{\dfrac{1}{2}\|w\|^2 + C\sum_{i=1}^{m} \xi_i\right\}\quad \text{s.t.}\ y^{(i)}(w^T x^{(i)} + b) \ge 1 - \xi_i,\ \xi_i \ge 0$

  • Solving the analogous dual problem gives (the corresponding complementary-slackness conditions are listed below):

    $W = \max_{\alpha}\left\{\sum_{i=1}^{m} \alpha^{(i)} - \dfrac{1}{2}\sum_{i,j=1}^{m} y^{(i)} y^{(j)} \alpha^{(i)} \alpha^{(j)} \langle x^{(i)}, x^{(j)} \rangle\right\}\quad \text{s.t.}\ 0 \le \alpha^{(i)} \le C,\ \sum_{i=1}^{m} \alpha^{(i)} y^{(i)} = 0$
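    For reference, the standard complementary-slackness conditions for the soft-margin dual, with $f(x) = w^T x + b$; they are not derived in these notes, but they are what SMO checks when choosing which $\alpha$ to update:

    $\alpha^{(i)} = 0 \;\Rightarrow\; y^{(i)} f(x^{(i)}) \ge 1, \qquad \alpha^{(i)} = C \;\Rightarrow\; y^{(i)} f(x^{(i)}) \le 1, \qquad 0 < \alpha^{(i)} < C \;\Rightarrow\; y^{(i)} f(x^{(i)}) = 1$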

6. Solving the Refined Problem

  • Coordinate ascent for the maximization (a runnable toy sketch follows the figure below)

        # pseudocode
        loop until converged {
            for i in range(m):
                alpha(i) := the value of alpha(i) that maximizes W(alpha),
                            holding all other alphas fixed
        }
  • [Figure: optimization paths of coordinate ascent vs. gradient ascent]
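    A runnable toy version of the idea, maximizing a made-up concave quadratic $f(a_1, a_2) = -(a_1 - 1)^2 - (a_2 - 2)^2 - a_1 a_2$ one coordinate at a time; each inner update is the closed-form argmax with the other coordinate held fixed:

        def coordinate_ascent(steps=50):
            a1, a2 = 0.0, 0.0
            for _ in range(steps):
                a1 = (2.0 - a2) / 2.0  # solves df/da1 = -2(a1 - 1) - a2 = 0
                a2 = (4.0 - a1) / 2.0  # solves df/da2 = -2(a2 - 2) - a1 = 0
            return a1, a2

        print(coordinate_ascent())  # converges to the maximizer (0, 2)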

  • SMO (the constraint $\sum_i \alpha^{(i)} y^{(i)} = 0$ rules out moving a single $\alpha$ alone, so SMO updates two at a time; the clipping bounds $L, H$ are given after the pseudocode)

        # pseudocode
        loop until converged {
            pick a pair i, j by a heuristic
            jointly maximize W over alpha(i), alpha(j), holding the other alphas fixed
            clip the updated alphas to [L, H] so that 0 <= alpha <= C still holds
        }
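    For completeness, the standard SMO clipping bounds (from Platt's SMO; not derived in these notes): when the pair $(\alpha^{(i)}, \alpha^{(j)})$ is optimized jointly, the equality constraint confines $\alpha^{(j)}$ to the segment $[L, H]$ with

    $y^{(i)} \ne y^{(j)}:\quad L = \max(0,\ \alpha^{(j)} - \alpha^{(i)}),\quad H = \min(C,\ C + \alpha^{(j)} - \alpha^{(i)})$
    $y^{(i)} = y^{(j)}:\quad L = \max(0,\ \alpha^{(i)} + \alpha^{(j)} - C),\quad H = \min(C,\ \alpha^{(i)} + \alpha^{(j)})$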

7. Practice

  • trainsets, 90 samples in total (two features and a 0/1 class label per line)

    -0.017612  14.053064  0
    -1.395634  4.662541   1
    -0.752157  6.538620   0
    -1.322371  7.152853   0
    ...
    -1.076637  -3.181888  1
    1.821096   10.283990  0
    3.010150   8.401766   1
    -1.099458  1.688274   1
    -0.834872  -1.733869  1
    -0.846637  3.849075   1
    1.400102   12.628781  0
    1.752842   5.468166   1
    0.078557   0.059736   1

  • testsets, 10 samples in total

    0.089392   -0.715300  1
    1.825662   12.693808  0
    0.197445   9.744638   0
    0.126117   0.922311   1
    -0.679797  1.220530   1
    0.677983   2.556666   1
    0.761349   10.693862  0
    -2.168791  0.143632   1
    1.388610   9.341997   0
    0.317029   14.739025  0

  • Logistic regression results

    • Weights: weight = [[11.93391219], [1.12324688], [-1.60965531]]
    • Ground-truth labels from the test file: y = [1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0]
    • Logistic regression predictions: y1 = [1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0]
    • The accuracy is quite high (10/10 on this test set)
    • The code (usage note after the listing):
#!/usr/bin/env python
#coding:utf-8
import numpy
import sys
from matplotlib import pyplot
import random

def makedata(filename):
    # read the training file; each row becomes [1.0, x1, x2, label],
    # where the leading 1.0 is the intercept (bias) term
    with open(filename, "r") as f:
        lines = f.readlines()
    datalist = [[float(i) for i in line.split()] for line in lines]
    for row in datalist:
        row.insert(0, 1.0)
    return datalist

def makedat(filename):
    # read the test file and split each row into features x and label y
    with open(filename, "r") as f:
        lines = f.readlines()
    datalist = [[float(i) for i in line.split()] for line in lines]
    x = [line[:-1] for line in datalist]
    y = [line[-1] for line in datalist]
    return x, y

def sigma(z):
    # sigmoid function; z may be a scalar or a numpy matrix
    return 1.0/(1+numpy.exp(-z))

# batch gradient ascent for logistic regression
def logisticFunc(dataset,itertimes,alpha):
    weight = numpy.ones((len(dataset[0])-1,1))        # one weight per column incl. bias
    value = [ int(i[-1]) for i in dataset ]
    value = numpy.mat(value).transpose()              # labels as a column vector
    params = [ i[0:-1] for i in dataset ]
    params = numpy.mat(params)                        # m x (n+1) design matrix
    for i in range(int(itertimes)):
        error = value-sigma(params*weight)            # prediction error on all samples
        weight = weight+alpha*params.transpose()*error  # full-batch ascent step
    return weight

# stochastic gradient ascent: update with one random sample per iteration
def randLogisticFunc(dataset,itertimes,alpha):
    weight = numpy.ones((len(dataset[0])-1,1))
    value = [ int(i[-1]) for i in dataset ]
    value = numpy.mat(value).transpose()
    params = [ i[0:-1] for i in dataset ]
    params = numpy.mat(params)
    for i in range(int(itertimes)):
        randid = random.randint(0,len(dataset)-1)     # pick one sample at random
        error = value[randid]-sigma(params[randid]*weight)
        weight = weight+alpha*params[randid].transpose()*error
    return weight


# scatter the two classes and draw the fitted decision boundary
def plot(data,weight):
    x1 = []
    x2 = []
    y1 = []
    y2 = []
    for i in data:
        if i[-1] == 1:
            x1.append(i[1])
            y1.append(i[2])
        else:
            x2.append(i[1])
            y2.append(i[2])
    x = numpy.linspace(-3,3,1000)
    weight = numpy.array(weight)
    y = (-weight[0][0]-weight[1][0]*x)/weight[2][0]
    fg = pyplot.figure()
    sp = fg.add_subplot(111)
    sp.scatter(x1,y1,s=30,c="red")
    sp.scatter(x2,y2,s=30,c="blue")
    sp.plot(x,y)
    pyplot.show()

# classify test samples with the learned weights and print the predictions
def predict(weight,x1):
    yi = []
    for i in x1:
        if weight[0][0]+i[0]*weight[1][0]+i[1]*weight[2][0]>=0:
            yi.append(1)
        else:
            yi.append(0)
    print(yi)


def main():
    trainfile = sys.argv[1]
    itertimes = int(sys.argv[2])
    alpha = float(sys.argv[3])
    testfile = sys.argv[4]
    data = makedata(trainfile)
    testx,testy = makedat(testfile)
    weight = logisticFunc(data,itertimes,alpha)
    print(weight)
    predict(weight,testx)
    print(testy)
    #weight = randLogisticFunc(data,itertimes,alpha)
    #print weight
    plot(data,weight)
if __name__=='__main__':
    main()
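Usage sketch (the file names and hyper-parameter values are made up): save the data above to plain-text files and run, e.g., `python logistic.py train.txt 500 0.001 test.txt` — the arguments are the training file, the number of iterations, the learning rate, and the test file, matching the order read in main().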
  • SVM results (Gaussian kernel, using the sklearn library)
    • Ground-truth labels from the test file: y = [1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0]
    • SVM predictions: y1 = array([1., 0., 0., 1., 1., 1., 0., 1., 0., 0.])
    • The accuracy is also high (10/10)
    • The code (usage note after the listing):
#!/usr/bin/env python
#coding:utf-8
from sklearn import svm
import sys
def makedata(filename):
    # read a data file and split each row into features x and label y
    with open(filename,"r") as f:
        lines = f.readlines()
    datalist = [[float(i) for i in line.split()] for line in lines]
    x = [line[:-1] for line in datalist]
    y = [line[-1] for line in datalist]
    return x,y
def learn(x,y):
    clf = svm.SVC()      # sklearn's SVC defaults to the RBF (Gaussian) kernel
    clf.fit(x,y)
    return clf
def predict(x1,y1,clf):
    print("svm fit results", clf.predict(x1))
    print("original test file results", y1)
if __name__=="__main__":
    inputfile = sys.argv[1]
    testfile = sys.argv[2]
    x,y = makedata(inputfile)
    x1,y1 = makedata(testfile)
    clf = learn(x,y)
    predict(x1, y1, clf)
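Usage sketch (file names made up): `python svm_demo.py train.txt test.txt`, where the two arguments are the training and test files in the same whitespace-separated format shown above.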