Python for Data Analysis, Chapter 7: Concatenating Along an Axis and Data Transformation

Concatenating Along an Axis

Another kind of data combination operation is referred to as concatenation, binding, or stacking. NumPy's concatenate function can do this with raw NumPy arrays:

In [2]: import pandas as pd

In [3]: import numpy as np

In [4]: arr = np.arange(12).reshape((3, 4))

In [5]: arr
Out[5]: 
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])

In [6]: np.concatenate([arr, arr], axis=1)
Out[6]: 
array([[ 0,  1,  2,  3,  0,  1,  2,  3],
       [ 4,  5,  6,  7,  4,  5,  6,  7],
       [ 8,  9, 10, 11,  8,  9, 10, 11]])

With pandas objects such as Series and DataFrame, labeled axes let you further generalize array concatenation. In particular, there are several additional things to think about:
If the objects are indexed differently on the other axes, should the result be the union or the intersection of those indexes?
Do the concatenated pieces of data need to be identifiable in the resulting object?
Does the concatenation axis matter at all?

Suppose we have three Series with no index overlap:

In [7]: s1 = pd.Series([0, 1], index=['a', 'b'])

In [8]: s2 = pd.Series([2, 3, 4], index=['c', 'd', 'e'])

In [9]: s3 = pd.Series([5, 6], index=['f', 'g'])

Calling concat with these objects glues together the values and indexes:

In [11]: pd.concat([s1, s2, s3])
Out[11]: 
a    0
b    1
c    2
d    3
e    4
f    5
g    6
dtype: int64

By default concat works along axis=0, producing another Series. If you pass axis=1, the result will instead be a DataFrame (axis=1 is the columns):

In [12]: pd.concat([s1, s2, s3],axis=1)
Out[12]: 
     0    1    2
a  0.0  NaN  NaN
b  1.0  NaN  NaN
c  NaN  2.0  NaN
d  NaN  3.0  NaN
e  NaN  4.0  NaN
f  NaN  NaN  5.0
g  NaN  NaN  6.0

In this case there is no overlap on the other axis, which you can see from the sorted union (the "outer" join) of the indexes. Pass join='inner' to intersect them instead:

In [13]: s4 = pd.concat([s1 * 5, s3])

In [14]: s4
Out[14]: 
a    0
b    5
f    5
g    6
dtype: int64

In [15]: pd.concat([s1,s4],axis=1)
Out[15]: 
     0  1
a  0.0  0
b  1.0  5
f  NaN  5
g  NaN  6

In [16]: pd.concat([s1, s4], axis=1, join='inner')
Out[16]: 
   0  1
a  0  0
b  1  5

You can specify the index to be used on the other axes with join_axes (note that join_axes was removed in pandas 1.0):

In [17]: pd.concat([s1, s4], axis=1, join_axes=[['a', 'c', 'b', 'e']])
Out[17]: 
     0    1
a  0.0  0.0
c  NaN  NaN
b  1.0  5.0
e  NaN  NaN
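Since join_axes is gone in pandas 1.0 and later, a rough modern equivalent (a sketch of my own, not the book's code) is to concatenate first and then reindex the rows to the labels you want:

```python
import pandas as pd

s1 = pd.Series([0, 1], index=['a', 'b'])
s4 = pd.concat([s1 * 5, pd.Series([5, 6], index=['f', 'g'])])

# Equivalent of join_axes=[['a', 'c', 'b', 'e']]: concatenate on the
# union of the indexes, then reindex to the desired row labels.
result = pd.concat([s1, s4], axis=1).reindex(['a', 'c', 'b', 'e'])
```

Labels absent from either input ('c' and 'e' here) simply come out as NaN, just as with join_axes.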

One issue is that the concatenated pieces are not identifiable in the result. Suppose instead you want to create a hierarchical index on the concatenation axis; you can do this with the keys argument:

In [18]: result = pd.concat([s1, s1, s3], keys=['one', 'two', 'three'])

In [19]: result
Out[19]: 
one    a    0
       b    1
two    a    0
       b    1
three  f    5
       g    6
dtype: int64

Unstacking the result converts the rows into columns:

In [20]: result.unstack()
Out[20]: 
         a    b    f    g
one    0.0  1.0  NaN  NaN
two    0.0  1.0  NaN  NaN
three  NaN  NaN  5.0  6.0

When combining Series along axis=1, the keys become the DataFrame's column headers:

In [21]: pd.concat([s1, s2, s3], axis=1, keys=['one', 'two', 'three'])
Out[21]: 
   one  two  three
a  0.0  NaN    NaN
b  1.0  NaN    NaN
c  NaN  2.0    NaN
d  NaN  3.0    NaN
e  NaN  4.0    NaN
f  NaN  NaN    5.0
g  NaN  NaN    6.0

The same logic extends to DataFrame objects:

In [22]: df1 = pd.DataFrame(np.arange(6).reshape(3, 2), index=['a', 'b', 'c'],
    ...: columns=['one', 'two'])

In [23]: df2 = pd.DataFrame(5 + np.arange(4).reshape(2, 2), index=['a', 'c'],
    ...: columns=['three', 'four'])

In [24]: df1
Out[24]: 
   one  two
a    0    1
b    2    3
c    4    5

In [25]: df2
Out[25]: 
   three  four
a      5     6
c      7     8

In [26]: pd.concat([df1, df2])
Out[26]: 
   four  one  three  two
a   NaN  0.0    NaN  1.0
b   NaN  2.0    NaN  3.0
c   NaN  4.0    NaN  5.0
a   6.0  NaN    5.0  NaN
c   8.0  NaN    7.0  NaN

In [28]: pd.concat([df1, df2], keys=['level1', 'level2'])
Out[28]: 
          four  one  three  two
level1 a   NaN  0.0    NaN  1.0
       b   NaN  2.0    NaN  3.0
       c   NaN  4.0    NaN  5.0
level2 a   6.0  NaN    5.0  NaN
       c   8.0  NaN    7.0  NaN

In [29]: pd.concat([df1, df2], axis=1, keys=['level1', 'level2'])
Out[29]: 
  level1     level2     
     one two  three four
a      0   1    5.0  6.0
b      2   3    NaN  NaN
c      4   5    7.0  8.0

If you pass a dict of objects instead of a list, the dict's keys will be used for the keys option:

In [30]: pd.concat({'level1': df1, 'level2': df2}, axis=1)
Out[30]: 
  level1     level2     
     one two  three four
a      0   1    5.0  6.0
b      2   3    NaN  NaN
c      4   5    7.0  8.0

There are a couple of additional arguments governing how the hierarchical index is created; for example, names labels the index levels:

In [31]: pd.concat([df1, df2], axis=1, keys=['level1', 'level2'],
    ...: names=['upper', 'lower'])
Out[31]: 
upper level1     level2     
lower    one two  three four
a          0   1    5.0  6.0
b          2   3    NaN  NaN
c          4   5    7.0  8.0

A last consideration concerns DataFrames in which the row index is not relevant to the current analysis:

In [32]: df1 = pd.DataFrame(np.random.randn(3, 4), columns=['a', 'b', 'c', 'd'])

In [33]: df2 = pd.DataFrame(np.random.randn(2, 3), columns=['b', 'd', 'a'])

In [34]: df1
Out[34]: 
          a         b         c         d
0  1.617624 -1.218221 -0.426647 -1.251856
1  0.166891  0.723824 -0.528937 -1.023203
2 -1.687020 -1.998333 -0.112431 -0.231684

In [35]: df2
Out[35]: 
          b         d         a
0  1.145881 -0.585634  1.664464
1 -1.461537 -0.121653 -0.120717

In [36]: pd.concat([df1, df2])
Out[36]: 
          a         b         c         d
0  1.617624 -1.218221 -0.426647 -1.251856
1  0.166891  0.723824 -0.528937 -1.023203
2 -1.687020 -1.998333 -0.112431 -0.231684
0  1.664464  1.145881       NaN -0.585634
1 -0.120717 -1.461537       NaN -0.121653

In this case, pass ignore_index=True:

In [37]: pd.concat([df1, df2], ignore_index=True)
Out[37]: 
          a         b         c         d
0  1.617624 -1.218221 -0.426647 -1.251856
1  0.166891  0.723824 -0.528937 -1.023203
2 -1.687020 -1.998333 -0.112431 -0.231684
3  1.664464  1.145881       NaN -0.585634
4 -0.120717 -1.461537       NaN -0.121653

Combining Data with Overlap

There is another data combination situation that can't be expressed as either a merge or a concatenation operation: two datasets whose indexes overlap in full or in part. As a motivating example, consider NumPy's where function, which expresses a vectorized if-else:

In [38]: a = pd.Series([np.nan, 2.5, np.nan, 3.5, 4.5, np.nan],
    ...: index=['f', 'e', 'd', 'c', 'b', 'a'])

In [39]: b = pd.Series(np.arange(len(a), dtype=np.float64),
    ...: index=['f', 'e', 'd', 'c', 'b', 'a'])

In [40]: a
Out[40]: 
f    NaN
e    2.5
d    NaN
c    3.5
b    4.5
a    NaN
dtype: float64

In [41]: b
Out[41]: 
f    0.0
e    1.0
d    2.0
c    3.0
b    4.0
a    5.0
dtype: float64

In [42]: b.iloc[-1] = np.nan

In [43]: b
Out[43]: 
f    0.0
e    1.0
d    2.0
c    3.0
b    4.0
a    NaN
dtype: float64

In [44]: np.where(pd.isnull(a), b, a)
Out[44]: array([ 0. , 2.5, 2. , 3.5, 4.5, nan])

In [45]: b[:2]
Out[45]: 
f    0.0
e    1.0
dtype: float64

In [46]: b[:-2]
Out[46]: 
f    0.0
e    1.0
d    2.0
c    3.0
dtype: float64

In [47]: a[2:]
Out[47]: 
d    NaN
c    3.5
b    4.5
a    NaN
dtype: float64

Series has a combine_first method, which performs the equivalent of this operation plus data alignment:

In [48]: b[:-2].combine_first(a[2:])
Out[48]: 
a    NaN
b    4.5
c    3.0
d    2.0
e    1.0
f    0.0
dtype: float64
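Conceptually, combine_first amounts to "align the two objects, then take the caller's value unless it is missing". A minimal sketch of that equivalence (my own illustration, not code from the book):

```python
import numpy as np
import pandas as pd

a = pd.Series([np.nan, 2.5, np.nan, 3.5, 4.5, np.nan],
              index=['f', 'e', 'd', 'c', 'b', 'a'])
b = pd.Series([0., 1., 2., 3., 4., np.nan],
              index=['f', 'e', 'd', 'c', 'b', 'a'])

combined = b[:-2].combine_first(a[2:])

# Manually: align both Series to the union of their indexes, then
# prefer the left (calling) object's values wherever they are not null.
left, right = b[:-2].align(a[2:])
manual = left.where(left.notnull(), right)
```

Both combined and manual give the same patched, index-aligned result.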

With DataFrames, combine_first naturally does the same thing column by column, so you can think of it as "patching" missing data in the calling object with data from the object you pass:

In [49]: df1 = pd.DataFrame({'a': [1., np.nan, 5., np.nan],
    ...: 'b': [np.nan, 2., np.nan, 6.],
    ...: 'c': range(2, 18, 4)})

In [50]: df2 = pd.DataFrame({'a': [5, 4, np.nan, 3, 7],
    ...: 'b': [np.nan, 3, 4, 6, 8]})

In [51]: df1
Out[51]: 
     a    b   c
0  1.0  NaN   2
1  NaN  2.0   6
2  5.0  NaN  10
3  NaN  6.0  14

In [52]: df2
Out[52]: 
     a    b
0  5.0  NaN
1  4.0  3.0
2  NaN  4.0
3  3.0  6.0
4  7.0  8.0

In [53]: df1.combine_first(df2)
Out[53]: 
     a    b     c
0  1.0  NaN   2.0
1  4.0  2.0   6.0
2  5.0  4.0  10.0
3  3.0  6.0  14.0
4  7.0  8.0   NaN

Reshaping and Pivoting

There are a number of fundamental operations for rearranging tabular data. These are referred to as reshape or pivot operations.

Reshaping with Hierarchical Indexing

Hierarchical indexing provides a consistent way to rearrange data in a DataFrame. There are two primary actions:
stack: "rotates" the columns of the data into the rows.
unstack: pivots the rows of the data into the columns.
Consider a small DataFrame with string arrays as row and column indexes:

In [54]: data = pd.DataFrame(np.arange(6).reshape((2, 3)),
    ...: index=pd.Index(['Ohio', 'Colorado'], name='state'),
    ...: columns=pd.Index(['one', 'two', 'three'], name='number'))

In [55]: data
Out[55]: 
number    one  two  three
state                    
Ohio        0    1      2
Colorado    3    4      5

Using the stack method on this data pivots the columns into the rows, producing a Series:

In [56]: result = data.stack()

In [57]: result
Out[57]: 
state     number
Ohio      one       0
          two       1
          three     2
Colorado  one       3
          two       4
          three     5
dtype: int32

From a hierarchically indexed Series, you can rearrange the data back into a DataFrame with unstack:

In [58]: result.unstack()
Out[58]: 
number    one  two  three
state                    
Ohio        0    1      2
Colorado    3    4      5

By default the innermost level is unstacked (the same goes for stack). You can unstack a different level by passing a level number or name:

In [59]: result.unstack(0)
Out[59]: 
state   Ohio  Colorado
number                
one        0         3
two        1         4
three      2         5

In [60]: result.unstack(1)
Out[60]: 
number    one  two  three
state                    
Ohio        0    1      2
Colorado    3    4      5

In [61]: result.unstack('state')
Out[61]: 
state   Ohio  Colorado
number                
one        0         3
two        1         4
three      2         5

Unstacking might introduce missing data if not all of the level's values are found in each of the subgroups:

In [62]: s1 = pd.Series([0, 1, 2, 3], index=['a', 'b', 'c', 'd'])

In [63]: s2 = pd.Series([4, 5, 6], index=['c', 'd', 'e'])

In [64]: data2 = pd.concat([s1, s2], keys=['one', 'two'])

In [65]: data2
Out[65]: 
one  a    0
     b    1
     c    2
     d    3
two  c    4
     d    5
     e    6
dtype: int64

In [66]: data2.unstack()
Out[66]: 
       a    b    c    d    e
one  0.0  1.0  2.0  3.0  NaN
two  NaN  NaN  4.0  5.0  6.0

Stacking filters out missing data by default, so the operation is easily invertible:

In [67]: data2.unstack().stack()
Out[67]: 
one  a    0.0
     b    1.0
     c    2.0
     d    3.0
two  c    4.0
     d    5.0
     e    6.0
dtype: float64

In [69]: data2.unstack().stack(dropna=False)
Out[69]: 
one  a    0.0
     b    1.0
     c    2.0
     d    3.0
     e    NaN
two  a    NaN
     b    NaN
     c    4.0
     d    5.0
     e    6.0
dtype: float64

When you unstack in a DataFrame, the level unstacked becomes the lowest level in the result:

In [70]: df = pd.DataFrame({'left': result, 'right': result + 5},
    ...: columns=pd.Index(['left', 'right'], name='side'))

In [71]: df
Out[71]: 
side             left  right
state    number             
Ohio     one        0      5
         two        1      6
         three      2      7
Colorado one        3      8
         two        4      9
         three      5     10

In [72]: df.unstack()
Out[72]: 
side     left           right          
number    one two three   one two three
state                                  
Ohio        0   1     2     5   6     7
Colorado    3   4     5     8   9    10

In [73]: df.unstack('state')
Out[73]: 
side    left          right         
state   Ohio Colorado  Ohio Colorado
number                              
one        0        3     5        8
two        1        4     6        9
three      2        5     7       10

In [74]: df.unstack('state').stack()
Out[74]: 
side             left  right
number state                
one    Ohio         0      5
       Colorado     3      8
two    Ohio         1      6
       Colorado     4      9
three  Ohio         2      7
       Colorado     5     10

In [75]: df.unstack('state').stack('side')
Out[75]: 
state         Colorado  Ohio
number side                 
one    left          3     0
       right         8     5
two    left          4     1
       right         9     6
three  left          5     2
       right        10     7

Data Transformation

Everything in this chapter so far has dealt with rearranging data. Filtering, cleaning, and other transformations are another class of important operations.

Removing Duplicates

Duplicate rows crop up in DataFrames all the time. Here is an example:

In [83]: data = pd.DataFrame({'k1': ['one'] * 3 + ['two'] * 4,
    ...: 'k2': [1, 1, 2, 3, 3, 4, 4]})

In [84]: data
Out[84]: 
    k1  k2
0  one   1
1  one   1
2  one   2
3  two   3
4  two   3
5  two   4
6  two   4

The DataFrame method duplicated returns a boolean Series indicating whether each row is a duplicate:

In [85]: data.duplicated()
Out[85]: 
0    False
1     True
2    False
3    False
4     True
5    False
6     True
dtype: bool

Relatedly, drop_duplicates returns a DataFrame with the duplicate rows removed:

In [87]: data.drop_duplicates()
Out[87]: 
    k1  k2
0  one   1
2  one   2
3  two   3
5  two   4

Both of these methods consider all of the columns by default; alternatively, you can specify a subset of them to detect duplicates. Suppose we have an additional column of values and want to filter duplicates based only on the k1 column:

In [88]: data['v1'] = range(7)

In [89]: data['v1']
Out[89]: 
0    0
1    1
2    2
3    3
4    4
5    5
6    6
Name: v1, dtype: int32

In [90]: data.drop_duplicates(['k1'])
Out[90]: 
    k1  k2  v1
0  one   1   0
3  two   3   3

In [91]: data.drop_duplicates(['k1', 'k2'])
Out[91]: 
    k1  k2  v1
0  one   1   0
2  one   2   2
3  two   3   3
5  two   4   5
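Both methods keep the first observed row of each duplicate group by default. A quick sketch of the keep='last' option (available in modern pandas; not part of this transcript), which keeps the final row instead:

```python
import pandas as pd

data = pd.DataFrame({'k1': ['one'] * 3 + ['two'] * 4,
                     'k2': [1, 1, 2, 3, 3, 4, 4]})
data['v1'] = range(7)

# keep='last' changes which v1 values survive deduplication:
# for each (k1, k2) group, the row with the highest index wins.
last = data.drop_duplicates(['k1', 'k2'], keep='last')
```

Compare with the keep-first default above: the surviving v1 values shift from [0, 2, 3, 5] to [1, 2, 4, 6].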

Transforming Data Using a Function or Mapping

For many datasets, you may wish to perform some transformation based on the values in an array, Series, or column of a DataFrame. Consider the following data about various kinds of meat:

In [4]: data = pd.DataFrame({'food': ['bacon', 'pulled pork', 'bacon', 'Pastrami',
   ...: 'corned beef', 'Bacon', 'pastrami', 'honey ham', 'nova lox'],
   ...: 'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})

In [5]: data
Out[5]: 
          food  ounces
0        bacon     4.0
1  pulled pork     3.0
2        bacon    12.0
3     Pastrami     6.0
4  corned beef     7.5
5        Bacon     8.0
6     pastrami     3.0
7    honey ham     5.0
8     nova lox     6.0

Suppose you wanted to add a column indicating the type of animal that each food came from. Let's write down a mapping of each meat type to its animal:

In [6]: meat_to_animal = {'bacon': 'pig', 'pulled pork': 'pig', 'pastrami': 'cow',
   ...: 'corned beef': 'cow', 'honey ham': 'pig', 'nova lox': 'salmon'}

The map method on a Series accepts a function or dict-like object containing a mapping, but here we have a small problem in that some of the meats are capitalized while others are not. We therefore also need to convert each value to lowercase:

In [7]: data['animal'] = data['food'].map(str.lower).map(meat_to_animal)

In [8]: data
Out[8]: 
          food  ounces  animal
0        bacon     4.0     pig
1  pulled pork     3.0     pig
2        bacon    12.0     pig
3     Pastrami     6.0     cow
4  corned beef     7.5     cow
5        Bacon     8.0     pig
6     pastrami     3.0     cow
7    honey ham     5.0     pig
8     nova lox     6.0  salmon

We could also have passed a function that does all the work:

In [9]: data['food'].map(lambda x: meat_to_animal[x.lower()])
Out[9]: 
0       pig
1       pig
2       pig
3       cow
4       cow
5       pig
6       cow
7       pig
8    salmon
Name: food, dtype: object

Using map is a convenient way to perform element-wise transformations and other data cleaning operations.

Replacing Values

Filling in missing data with the fillna method can be thought of as a special case of more general value replacement. While map, as we just saw, can be used to modify a subset of values in an object, replace provides a simpler and more flexible way to do so.

In [10]: data = pd.Series([3, -888, 6, -888, -1000, 5])

In [11]: data
Out[11]: 
0       3
1    -888
2       6
3    -888
4   -1000
5       5
dtype: int64

The -888 values might be sentinel values for missing data. To replace them with NA values that pandas understands, we can use replace, producing a new Series:

In [12]: data.replace(-888,np.nan)
Out[12]: 
0       3.0
1       NaN
2       6.0
3       NaN
4   -1000.0
5       5.0
dtype: float64

If you want to replace multiple values at once, pass a list of the values to replace followed by the substitute value:

In [13]: data.replace([-888,-1000],np.nan)
Out[13]: 
0    3.0
1    NaN
2    6.0
3    NaN
4    NaN
5    5.0
dtype: float64

To use a different replacement for each value, pass a list of substitutes:

In [14]: data.replace([-888,-1000],[np.nan,0])
Out[14]: 
0    3.0
1    NaN
2    6.0
3    NaN
4    0.0
5    5.0
dtype: float64

The argument passed can also be a dict:

In [16]: data.replace({-888:0,-1000:np.nan})
Out[16]: 
0    3.0
1    0.0
2    6.0
3    0.0
4    NaN
5    5.0
dtype: float64

Renaming Axis Indexes

Like values in a Series, axis labels can be transformed by a function or mapping to produce new, differently labeled objects. The axes can also be modified in place without creating a new data structure:

In [17]: data = pd.DataFrame(np.arange(12).reshape((3, 4)),
    ...: index=['Ohio', 'Colorado', 'New York'],
    ...: columns=['one', 'two', 'three', 'four'])

Like a Series, the axis indexes have a map method:

In [18]: data
Out[18]: 
          one  two  three  four
Ohio        0    1      2     3
Colorado    4    5      6     7
New York    8    9     10    11

You can map the labels and assign the result to index, modifying the DataFrame in place:

In [19]: data.index.map(str.upper)
Out[19]: Index(['OHIO', 'COLORADO', 'NEW YORK'], dtype='object')

In [20]: data.index = data.index.map(str.upper)

In [21]: data
Out[21]: 
          one  two  three  four
OHIO        0    1      2     3
COLORADO    4    5      6     7
NEW YORK    8    9     10    11

If you want to create a transformed version of a dataset without modifying the original, a useful method is rename:

In [22]: data.rename(index=str.title, columns=str.upper)
Out[22]: 
          ONE  TWO  THREE  FOUR
Ohio        0    1      2     3
Colorado    4    5      6     7
New York    8    9     10    11

Notably, rename can be used in conjunction with a dict-like object to update a subset of the axis labels:

In [23]: data.rename(index={'OHIO': 'INDIANA'}, columns={'three': 'peekaboo'})
Out[23]: 
          one  two  peekaboo  four
INDIANA     0    1         2     3
COLORADO    4    5         6     7
NEW YORK    8    9        10    11

rename saves you from the chore of copying the DataFrame manually and assigning to its index and column labels. Should you wish to modify a dataset in place, pass inplace=True:

In [24]: _ = data.rename(index={'OHIO': 'INDIANA'}, inplace=True)

In [25]: data
Out[25]: 
          one  two  three  four
INDIANA     0    1      2     3
COLORADO    4    5      6     7
NEW YORK    8    9     10    11

Discretization and Binning

Continuous data is often discretized or otherwise separated into "bins" for analysis. Suppose you have data about a group of people, and you want to group them into discrete age buckets:

In [26]: ages = [20, 22, 25, 27, 21, 23, 37, 31, 61, 45, 41, 32]

In [27]: bins = [18, 25, 35, 60, 100]

In [28]: cats = pd.cut(ages, bins)

In [29]: cats
Out[29]: 
[(18, 25], (18, 25], (18, 25], (25, 35], (18, 25], ..., (25, 35], (60, 100], (35, 60], (35, 60], (25, 35]]
Length: 12
Categories (4, interval[int64]): [(18, 25] < (25, 35] < (35, 60] < (60, 100]]

The object pandas returns is a special Categorical object. You can treat it like an array of strings indicating the bin name; internally it contains a codes attribute (formerly labels, which is now deprecated) giving the bin number for each age:

In [30]: cats.labels
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:1: FutureWarning: 'labels' is deprecated. Use 'codes' instead
"""Entry point for launching an IPython kernel.
Out[30]: array([0, 0, 0, 1, 0, 0, 2, 1, 3, 2, 2, 1], dtype=int8)

In [31]: cats.codes
Out[31]: array([0, 0, 0, 1, 0, 0, 2, 1, 3, 2, 2, 1], dtype=int8)


In [35]: pd.value_counts(cats)
Out[35]: 
(18, 25]     5
(35, 60]     3
(25, 35]     3
(60, 100]    1
dtype: int64

Consistent with mathematical notation for intervals, a parenthesis means that the side is open, while a square bracket means it is closed (inclusive). Which side is closed can be changed by passing right=False:

In [36]: pd.cut(ages, [18, 26, 36, 61, 100], right=False)
Out[36]: 
[[18, 26), [18, 26), [18, 26), [26, 36), [18, 26), ..., [26, 36), [61, 100), [36, 61), [36, 61), [26, 36)]
Length: 12
Categories (4, interval[int64]): [[18, 26) < [26, 36) < [36, 61) < [61, 100)]

You can also set your own bin names by passing a list or array to the labels option:

In [37]: group_names = ['Youth', 'YoungAdult', 'MiddleAged', 'Senior']

In [38]: pd.cut(ages, bins, labels=group_names)
Out[38]: 
[Youth, Youth, Youth, YoungAdult, Youth, ..., YoungAdult, Senior, MiddleAged, MiddleAged, YoungAdult]
Length: 12
Categories (4, object): [MiddleAged < Senior < YoungAdult < Youth]

If you pass cut an integer number of bins instead of explicit bin edges, it will compute equal-length bins based on the minimum and maximum values in the data. In the following example, we chop some uniformly distributed data into fourths:

In [39]: data = np.random.rand(20)

In [40]: pd.cut(data, 4, precision=2)
Out[40]: 
[(0.032, 0.26], (0.032, 0.26], (0.26, 0.49], (0.73, 0.96], (0.26, 0.49], ..., (0.032, 0.26], (0.49, 0.73], (0.032, 0.26], (0.49, 0.73], (0.49, 0.73]]
Length: 20
Categories (4, interval[float64]): [(0.032, 0.26] < (0.26, 0.49] < (0.49, 0.73] < (0.73, 0.96]]

A closely related function, qcut, bins the data based on sample quantiles. Depending on the distribution of the data, using cut will not usually result in each bin having the same number of data points. Since qcut uses sample quantiles instead, you get roughly equal-size bins:

In [41]: data = np.random.randn(1000)  # normally distributed

In [42]: cats = pd.qcut(data, 4)  # cut into quartiles

In [43]: cats
Out[43]: 
[(-0.674, 0.0129], (0.755, 2.802], (0.0129, 0.755], (0.0129, 0.755], (-3.259, -0.674], ..., (0.0129, 0.755], (-3.259, -0.674], (-0.674, 0.0129], (-3.259, -0.674], (-0.674, 0.0129]]
Length: 1000
Categories (4, interval[float64]): [(-3.259, -0.674] < (-0.674, 0.0129] < (0.0129, 0.755] < (0.755, 2.802]]

In [44]: pd.value_counts(cats)
Out[44]: 
(0.755, 2.802]      250
(0.0129, 0.755]     250
(-0.674, 0.0129]    250
(-3.259, -0.674]    250
dtype: int64

Similar to cut, you can pass your own quantiles (numbers between 0 and 1, inclusive):

In [45]: pd.qcut(data, [0, 0.1, 0.5, 0.9, 1.])
Out[45]: 
[(-1.289, 0.0129], (1.296, 2.802], (0.0129, 1.296], (0.0129, 1.296], (-3.259, -1.289], ..., (0.0129, 1.296], (-3.259, -1.289], (-1.289, 0.0129], (-1.289, 0.0129], (-1.289, 0.0129]]
Length: 1000
Categories (4, interval[float64]): [(-3.259, -1.289] < (-1.289, 0.0129] < (0.0129, 1.296] < (1.296, 2.802]]

Detecting and Filtering Outliers

Filtering or transforming outliers is largely a matter of applying array operations. Consider a DataFrame with some normally distributed data:

In [46]: np.random.seed(12345)

In [47]: data = pd.DataFrame(np.random.randn(1000, 4))

In [48]: data.describe()
Out[48]: 
                 0            1            2            3
count  1000.000000  1000.000000  1000.000000  1000.000000
mean     -0.067684     0.067924     0.025598    -0.002298
std       0.998035     0.992106     1.006835     0.996794
min      -3.428254    -3.548824    -3.184377    -3.745356
25%      -0.774890    -0.591841    -0.641675    -0.644144
50%      -0.116401     0.101143     0.002073    -0.013611
75%       0.616366     0.780282     0.680391     0.654328
max       3.366626     2.653656     3.260383     3.927528

Suppose you wanted to find values in one of the columns exceeding 3 in absolute value:

In [49]: col = data[3]

In [50]: col[np.abs(col) > 3]
Out[50]: 
97     3.927528
305   -3.399312
400   -3.745356
Name: 3, dtype: float64

To select all rows having a value exceeding 3 or -3, you can use the any method on a boolean DataFrame:

In [51]: data[(np.abs(data) > 3).any(axis=1)]
Out[51]: 
            0         1         2         3
5   -0.539741  0.476985  3.248944 -1.021228
97  -0.774363  0.552936  0.106061  3.927528
102 -0.655054 -0.565230  3.176873  0.959533
305 -2.315555  0.457246 -0.025907 -3.399312
324  0.050188  1.951312  3.260383  0.963301
400  0.146326  0.508391 -0.196713 -3.745356
499 -0.293333 -0.242459 -3.056990  1.918403
523 -3.428254 -0.296336 -0.439938 -0.867165
586  0.275144  1.179227 -3.184377  1.369891
808 -0.362528 -3.548824  1.553205 -2.186301
900  3.366626 -2.372214  0.851010  1.332846

Values can easily be set based on these criteria. Here is code to cap values outside the interval -3 to 3:

In [52]: data[np.abs(data) > 3] = np.sign(data) * 3

In [53]: data.describe()
Out[53]: 
                 0            1            2            3
count  1000.000000  1000.000000  1000.000000  1000.000000
mean     -0.067623     0.068473     0.025153    -0.002081
std       0.995485     0.990253     1.003977     0.989736
min      -3.000000    -3.000000    -3.000000    -3.000000
25%      -0.774890    -0.591841    -0.641675    -0.644144
50%      -0.116401     0.101143     0.002073    -0.013611
75%       0.616366     0.780282     0.680391     0.654328
max       3.000000     2.653656     3.000000     3.000000

The ufunc np.sign returns an array of 1 and -1 values (and 0 for zeros) depending on the sign of each original value.
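A quick sketch of that behavior on its own:

```python
import numpy as np

# Elementwise sign: -1 for negative values, 0 for zero, 1 for positive.
arr = np.array([-2.5, 0.0, 3.1])
signs = np.sign(arr)
```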
Permutation and Random Sampling

Permuting (randomly reordering) a Series or the rows in a DataFrame is easy to do using the numpy.random.permutation function. Calling permutation with the length of the axis you want to permute produces an array of integers indicating the new ordering:

In [54]: df = pd.DataFrame(np.arange(5 * 4).reshape(5, 4))

In [55]: df
Out[55]: 
    0   1   2   3
0   0   1   2   3
1   4   5   6   7
2   8   9  10  11
3  12  13  14  15
4  16  17  18  19

In [56]: sampler = np.random.permutation(5)

In [57]: sampler
Out[57]: array([1, 0, 2, 3, 4])

That array can then be used in iloc-based indexing (the older ix indexer is deprecated) or the equivalent take function:

In [58]: df.take(sampler)
Out[58]: 
    0   1   2   3
1   4   5   6   7
0   0   1   2   3
2   8   9  10  11
3  12  13  14  15
4  16  17  18  19

To select a random subset without replacement, one way is to slice off the first k elements of the array returned by permutation, where k is the desired subset size:

In [61]: df.take(np.random.permutation(len(df))[:3])
Out[61]: 
    0   1   2   3
1   4   5   6   7
3  12  13  14  15
4  16  17  18  19

In [62]: df.take(np.random.permutation(len(df)))
Out[62]: 
    0   1   2   3
1   4   5   6   7
3  12  13  14  15
0   0   1   2   3
2   8   9  10  11
4  16  17  18  19

In [63]: df.take(np.random.permutation(len(df))[:3])
Out[63]: 
    0   1   2   3
1   4   5   6   7
0   0   1   2   3
4  16  17  18  19
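Newer pandas versions wrap this whole pattern in a sample method, which is not part of this transcript; a sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(5 * 4).reshape(5, 4))

# Without replacement: at most len(df) distinct rows, no repeats.
subset = df.sample(n=3)

# With replacement: rows may repeat, so any n is allowed.
draws = df.sample(n=10, replace=True)
```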

To generate a sample with replacement, the fastest way is to use np.random.randint to draw random integers:

In [64]: bag = np.array([5, 8, -1, 6, 2])

In [65]: bag
Out[65]: array([ 5,  8, -1,  6,  2])

In [66]: sampler = np.random.randint(0, len(bag), size=15)

In [67]: sampler
Out[67]: array([3, 0, 4, 1, 1, 2, 3, 0, 1, 2, 2, 3, 2, 1, 2])

In [68]: draws = bag.take(sampler)

In [69]: draws
Out[69]: array([ 6,  5,  2,  8,  8, -1,  6,  5,  8, -1, -1,  6, -1,  8, -1])

Computing Indicator/Dummy Variables

Another type of transformation for statistical modeling or machine learning applications is converting a categorical variable into a "dummy" or "indicator" matrix. If a column in a DataFrame has k distinct values, you would derive a matrix or DataFrame with k columns containing all 1's and 0's. pandas has a get_dummies function for doing this:

In [71]: df = pd.DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'b'],
    ...: 'data1': range(6)})

In [72]: df
Out[72]: 
   data1 key
0      0   b
1      1   b
2      2   a
3      3   c
4      4   a
5      5   b

In [73]: pd.get_dummies(df['key'])
Out[73]: 
   a  b  c
0  0  1  0
1  0  1  0
2  1  0  0
3  0  0  1
4  1  0  0
5  0  1  0

In some cases, you may want to add a prefix to the columns in the indicator DataFrame, which can then be merged with the other data. get_dummies has a prefix argument for doing this:

In [75]: dummies = pd.get_dummies(df['key'], prefix='key')

In [76]: dummies
Out[76]: 
   key_a  key_b  key_c
0      0      1      0
1      0      1      0
2      1      0      0
3      0      0      1
4      1      0      0
5      0      1      0

In [77]: df_with_dummy = df[['data1']].join(dummies)

In [78]: df_with_dummy
Out[78]: 
   data1  key_a  key_b  key_c
0      0      0      1      0
1      1      0      1      0
2      2      1      0      0
3      3      0      0      1
4      4      1      0      0
5      5      0      1      0
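A useful recipe for statistical applications is to combine get_dummies with a discretization function like cut, producing one indicator column per bin (a sketch with made-up uniform data):

```python
import numpy as np
import pandas as pd

np.random.seed(12345)
values = np.random.rand(10)
bins = [0, 0.2, 0.4, 0.6, 0.8, 1]

# One indicator column per interval; each row has exactly one 1,
# marking the bin its value fell into.
dummies = pd.get_dummies(pd.cut(values, bins))
```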

Next up: string manipulation.
