1. Overview
First, a brief summary of the causes and types of missing data and the methods for handling it, as shown in the figure below:
2. Direct Deletion
When missing values make up only a small fraction of the data, the rows containing them can simply be dropped. But if the proportion of missing values is large, deleting them outright throws away important information.
Before deleting anything, count how many missing values the dataset contains. The Python methods for counting missing values are shown below (working directly on a concrete dataset):
import numpy as np
import pandas as pd
data = pd.read_csv('1.csv') # the public algae dataset; leave a comment with your email if you need the data!
data.head()
null_all = data.isnull().sum() # count missing values per column (method 1)
null_all
data.info() # count missing values (method 2)
# new_data = data.dropna() # 1 -- drop every row that contains a missing value
# new_data = data.dropna(subset=['C1','Chla']) # 2 -- drop rows with missing values in the given columns
new_data = data.dropna(thresh=15) # 3 -- keep only rows with at least 15 non-missing values
new_data.info()
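To make the three dropna variants concrete, here is a minimal sketch on a toy frame (the column names and values are hypothetical, not the algae data):

```python
import numpy as np
import pandas as pd

# A tiny frame standing in for the algae data (hypothetical values)
df = pd.DataFrame({'A': [1.0, np.nan, 3.0],
                   'B': [4.0, np.nan, np.nan],
                   'C': [7.0, 8.0, 9.0]})

print(df.isnull().sum().sum())   # total missing values: 3
print(len(df.dropna()))          # drop any row with a NaN -> 1 row left
print(len(df.dropna(thresh=2)))  # keep rows with >= 2 non-NaN values -> 2 rows
```

Note that `thresh` counts the non-missing values a row must have to survive, not the missing ones.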
3. Forward Fill / Backward Fill
import numpy as np
import pandas as pd
data = pd.read_csv('1.csv')
data[50:60] # show the rows with missing values
data = data.fillna(method='ffill') # ffill -- forward fill; bfill -- backward fill
data[50:60]
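A quick sketch of the difference between the two directions, on made-up values (recent pandas versions prefer the `.ffill()` / `.bfill()` methods over `fillna(method=...)`):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, np.nan, 4.0])
print(s.ffill().tolist())  # forward fill:  [1.0, 1.0, 1.0, 4.0]
print(s.bfill().tolist())  # backward fill: [1.0, 4.0, 4.0, 4.0]
```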
4. Mean / Median / Mode Filling
Missing values can often be filled based on similarity between samples, using a value that represents the variable's central tendency. Common measures of central tendency include the mean, the median, and the mode. So which of these should we use to fill missing values?
(4.1) Method 1: .fillna()
import numpy as np
import pandas as pd
data = pd.read_csv('1.csv')
data['C1'] = data['C1'].fillna(data['C1'].mean()) # mean fill; .median() and .mode() work analogously
data[50:60]
Note: when filling with the mode, watch out for the cases where the mode does not exist or there is more than one!
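A minimal sketch of that caveat, using made-up values where two modes tie:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, 2.0, 3.0, 3.0, np.nan])
modes = s.mode()        # 2.0 and 3.0 tie, so .mode() returns a Series of both
print(modes.tolist())   # [2.0, 3.0]
# Passing the whole Series to fillna would align by index; take one value instead:
filled = s.fillna(modes[0]) if not modes.empty else s
print(filled.tolist())  # [1.0, 2.0, 2.0, 3.0, 3.0, 2.0]
```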
(4.2) Method 2: SimpleImputer
SimpleImputer provides basic strategies for imputing missing values, such as replacing them with the mean, median, or most frequent value of the column they belong to.
import numpy as np
import pandas as pd
data = pd.read_csv('1.csv')
# from sklearn.preprocessing import Imputer # older scikit-learn versions
from sklearn.impute import SimpleImputer # scikit-learn 0.22+
imputer = SimpleImputer(strategy='mean')
imputer = imputer.fit(data.iloc[:,3:].values)
imputer_data = pd.DataFrame(imputer.transform(data.iloc[:,3:].values),columns=data.columns[3:])
imputer_data[53:64]
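Stripped of the dataset specifics, the SimpleImputer workflow can be sketched on a toy frame (hypothetical columns x and y):

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

df = pd.DataFrame({'x': [1.0, np.nan, 3.0], 'y': [10.0, 20.0, np.nan]})
imp = SimpleImputer(strategy='mean')  # also: 'median', 'most_frequent', 'constant'
out = pd.DataFrame(imp.fit_transform(df), columns=df.columns)
print(out)
# x's NaN -> mean(1, 3) = 2.0; y's NaN -> mean(10, 20) = 15.0
```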
5. Interpolation
interpolate() performs linear interpolation by default; for a single missing value between two known ones, this amounts to the average of the value before and the value after.
import numpy as np
import pandas as pd
data = pd.read_csv('1.csv')
data['C1'] = data['C1'].interpolate()
data[53:63]
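A small sketch showing what linear interpolation does with gaps of different lengths (made-up values):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0, np.nan, np.nan, 9.0])
print(s.interpolate().tolist())
# A single gap gets the midpoint: (1 + 3) / 2 = 2.0
# A two-value gap is filled linearly: 3 -> 5 -> 7 -> 9
```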
6. KNN Filling (neighbor mean)
To apply KNN filling, we first handle the features with few missing values by other methods (KNN must rely on complete neighbor features to find the nearest rows and fill the gap with a weighted average), which gives us the following feature data:
(6.1) from fancyimpute import KNN
First install the third-party package fancyimpute. Installation can be quite a struggle, especially on Windows!
【1】 Download the packages (7 packages + 1 installer)
- Link: https://pan.baidu.com/s/1CUfiaEyE-k4G560L2JsOYQ
- Extraction code: nriv
【2】 Install the packages (on Windows)
- pip install D:\fancyimpute\package1
- pip install D:\fancyimpute\package2
- pip install D:\fancyimpute\package3
- pip install D:\fancyimpute\package4
- pip install D:\fancyimpute\package5
- pip install D:\fancyimpute\package6
- pip install D:\fancyimpute\fancyimpute-0.5.4.tar.gz
【3】 You may see errors like these
- ERROR: tensorflow 2.1.0 has requirement scipy==1.4.1; python_version >= "3", but you'll have scipy 1.1.0 which is incompatible.
- ERROR: tensorflow 2.1.0 has requirement six>=1.12.0, but you'll have six 1.11.0 which is incompatible.
- ERROR: Cannot uninstall 'wrapt'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
【4】 Don't panic, keep installing
- pip install --upgrade scipy==1.4.1
- pip install --upgrade six==1.12.0
- pip install wrapt --ignore-installed
【5】 Wow, still more errors?
- ImportError: Could not find the DLL(s) 'msvcp140_1.dll'
【6】 One last install
- Install vc_redist.x64.exe from the netdisk download above and this is resolved.
Phew, finally back to the code!
data = pd.read_csv('1.csv')
# interpolate the features that have few missing values
data['mxPH'] = data['mxPH'].interpolate()
data['MNO2'] = data['MNO2'].interpolate()
data['NO3'] = data['NO3'].interpolate()
data['NH4'] = data['NH4'].interpolate()
data['Opo4'] = data['Opo4'].interpolate()
data['PO4'] = data['PO4'].interpolate()
data.info()
new_data = data.iloc[:,3:11]
new_data[53:64]
from fancyimpute import KNN # requires fancyimpute (installation steps above)
fill_knn = KNN(k=3).fit_transform(new_data)
new_data = pd.DataFrame(fill_knn,columns=data.columns[3:11])
new_data[53:64]
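If the fancyimpute install proves too painful, scikit-learn 0.22+ ships sklearn.impute.KNNImputer with similar behavior (a NaN-aware Euclidean distance plus neighbor averaging); a minimal sketch on toy data:

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

df = pd.DataFrame({'a': [1.0, 2.0, np.nan, 4.0],
                   'b': [1.0, 2.0, 3.0, 4.0]})
imputer = KNNImputer(n_neighbors=2)  # fill with the mean of the 2 nearest rows
out = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(out)
# Row 2's missing 'a' becomes mean(2.0, 4.0) = 3.0, from its two nearest rows
```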
(6.2) from sklearn.neighbors import KNeighborsRegressor
data = pd.read_csv('1.csv')
# interpolate the features that have few missing values
data['mxPH'] = data['mxPH'].interpolate()
data['MNO2'] = data['MNO2'].interpolate()
data['NO3'] = data['NO3'].interpolate()
data['NH4'] = data['NH4'].interpolate()
data['Opo4'] = data['Opo4'].interpolate()
data['PO4'] = data['PO4'].interpolate()
C1_data = data[['mxPH','MNO2', 'NO3', 'NH4', 'Opo4', 'PO4', 'C1']]
C1_data[53:64]
known_C1 = C1_data[C1_data.C1.notnull()]
unknown_C1 = C1_data[C1_data.C1.isnull()]
import numpy as np
y = known_C1.iloc[:, 6]
y_train = np.array(y)
X = known_C1.iloc[:, :6]
X_train = np.array(X)
X_test = np.array(unknown_C1.iloc[:, :6])
y_test = np.array(unknown_C1.iloc[:, 6]) # all NaN here; replaced by the predictions below
from sklearn.neighbors import KNeighborsRegressor
clf = KNeighborsRegressor(n_neighbors = 6, weights = "distance").fit(X_train,y_train)
y_test = clf.predict(X_test)
y_test
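The prediction step can be checked on a toy regression problem; with weights='uniform' (the code above uses 'distance'), the prediction is simply the mean of the nearest targets:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# 1-D feature with target = 2 * x (hypothetical values)
X_train = np.array([[1.0], [2.0], [3.0], [5.0]])
y_train = np.array([2.0, 4.0, 6.0, 10.0])

clf = KNeighborsRegressor(n_neighbors=2, weights='uniform').fit(X_train, y_train)
pred = clf.predict(np.array([[2.5]]))
print(pred)  # mean of the two nearest targets (4.0 and 6.0) -> [5.0]
```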
7. Random Forest Filling
Enough preamble above, straight to the code!
data = pd.read_csv('1.csv')
data.mxPH = data.mxPH.fillna(data.mxPH.mean())
data.MNO2 = data.MNO2.fillna(data.MNO2.mean())
C1_data = data[['mxPH','MNO2', 'C1']]
C1_data[53:64]
known_C1 = C1_data[C1_data.C1.notnull()]
unknown_C1 = C1_data[C1_data.C1.isnull()]
import numpy as np
y = known_C1.iloc[:, 2]
y = np.array(y)
X = known_C1.iloc[:, :2]
X = np.array(X)
from sklearn.ensemble import RandomForestRegressor
rfr = RandomForestRegressor(random_state=0, n_estimators=200, n_jobs=-1)
rfr.fit(X, y)
data.loc[(data.C1.isnull()), 'C1'] = rfr.predict(unknown_C1.iloc[:, :2])
data[53:64]
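The same fill-by-prediction pattern, reduced to a self-contained toy example (hypothetical feature and target values):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Toy frame: 'target' is missing in one row
df = pd.DataFrame({'f1': [1.0, 2.0, 3.0, 4.0],
                   'f2': [2.0, 4.0, 6.0, 8.0],
                   'target': [10.0, 20.0, np.nan, 40.0]})

known = df[df.target.notnull()]      # rows to train on
unknown = df[df.target.isnull()]     # rows to fill

rfr = RandomForestRegressor(random_state=0, n_estimators=50)
rfr.fit(known[['f1', 'f2']], known['target'])
df.loc[df.target.isnull(), 'target'] = rfr.predict(unknown[['f1', 'f2']])
print(df.target.isnull().sum())      # 0 -> every gap filled by a prediction
```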
8. Summary
- That is all for now; more methods will be added in later updates.
- The next post in this series will cover some feature engineering methods. Stay tuned!
- Continuously updated, to be continued…