Python API for CNTK
See the official docs: https://www.cntk.ai/pythondocs/gettingstarted.html
1. Print the CNTK version:
>>> import cntk
>>> cntk.__version__
'2.0rc2'
2. Subtract two arrays elementwise and evaluate the result (eval = evaluate):
>>> cntk.minus([1, 2, 3], [4, 5, 6]).eval()
array([-3., -3., -3.], dtype=float32)
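The same elementwise subtraction can be checked in plain NumPy (a quick comparison sketch, using the array values from the example above):

```python
import numpy as np

# Elementwise subtraction, matching cntk.minus([1, 2, 3], [4, 5, 6]).eval()
a = np.array([1, 2, 3], dtype=np.float32)
b = np.array([4, 5, 6], dtype=np.float32)
print(a - b)  # [-3. -3. -3.], the same values the CNTK op produces
```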
3. Compute (2-4)**2 + (1-6)**2:
>>> import numpy as np
>>> x = cntk.input(2)
>>> y = cntk.input(2)
>>> x0 = np.asarray([[2., 1.]], dtype=np.float32)
>>> y0 = np.asarray([[4., 6.]], dtype=np.float32)
>>> cntk.squared_error(x, y).eval({x:x0, y:y0})
array([ 29.], dtype=float32)
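squared_error sums the squared differences over the whole vector; the arithmetic can be checked by hand in NumPy (values taken from the example above):

```python
import numpy as np

x0 = np.asarray([2., 1.], dtype=np.float32)
y0 = np.asarray([4., 6.], dtype=np.float32)
# squared error = (2-4)**2 + (1-6)**2 = 4 + 25
print(np.sum((x0 - y0) ** 2))  # 29.0
```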
4. The asarray() method returns the value as a NumPy array:
>>> import cntk as C
>>> c = C.constant(3, shape=(2,3))
>>> c.asarray()
array([[ 3.,  3.,  3.],
       [ 3.,  3.,  3.]], dtype=float32)
>>> np.ones_like(c.asarray())
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.]], dtype=float32)
5. Overview and first run
CNTK 2 is a major overhaul of the previous version. It gives you much finer control over data handling: how data is read in, how training and testing run, and how minibatches are built. The Python API exposes the construction of the network graph and the data-input pipeline directly, not only to support larger and more complex networks, but also to allow interactive Python use while a model is being created and debugged. CNTK 2 also ships with an example library and a layers library (both intended to keep growing) that let you build powerful deep networks simply by composing building blocks, e.g. CNNs, RNNs (LSTMs), and FCNs.
First basic example: a standard fully connected deep network (FCN).
To train or run a network in CNTK, the first step is to decide which device it runs on. If a GPU is available, training time improves dramatically. To select the device explicitly:
from cntk.device import set_default_device, gpu
set_default_device(gpu(0))
An example network that trains a classifier with fully connected layers, using the functions Sequential() and Dense().
Network structure: a 2-layer fully connected deep neural network with 50 hidden dimensions per layer.
ce is the cross entropy, which defines the model's loss function; pe is the classification error.
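As a rough NumPy sketch (not CNTK code) of what these two criteria compute for a single sample — the softmax cross entropy behind ce and the top-1 mismatch behind pe:

```python
import numpy as np

def cross_entropy_with_softmax(z, label):
    # Softmax over the raw network output, then the negative log-likelihood
    # of the one-hot label (the quantity C.cross_entropy_with_softmax measures).
    p = np.exp(z - np.max(z))  # subtract max for numerical stability
    p /= p.sum()
    return -np.sum(label * np.log(p))

def classification_error(z, label):
    # 1.0 if the arg-max prediction disagrees with the label, else 0.0
    return float(np.argmax(z) != np.argmax(label))

z = np.array([2.0, 0.5])       # raw network outputs (logits)
label = np.array([1.0, 0.0])   # one-hot target: class 0

print(cross_entropy_with_softmax(z, label))  # small positive loss, prediction is right
print(classification_error(z, label))        # 0.0
```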
The code is as follows:
#encoding=utf-8
from __future__ import print_function
import numpy as np
import cntk as C
from cntk.learners import sgd, learning_rate_schedule, UnitType
from cntk.logging import ProgressPrinter
from cntk.layers import Dense, Sequential
def generate_random_data(sample_size, feature_dim, num_classes):
    # Create synthetic data using NumPy.
    Y = np.random.randint(size=(sample_size, 1), low=0, high=num_classes)

    # Make sure that the data is separable
    X = (np.random.randn(sample_size, feature_dim) + 3) * (Y + 1)
    X = X.astype(np.float32)

    # converting class 0 into the vector "1 0 0",
    # class 1 into vector "0 1 0", ...
    class_ind = [Y == class_number for class_number in range(num_classes)]
    Y = np.asarray(np.hstack(class_ind), dtype=np.float32)
    return X, Y

def ffnet():
    inputs = 2
    outputs = 2
    layers = 2
    hidden_dimension = 50

    # Set up the model
    # input variables denoting the features and label data
    features = C.input((inputs), np.float32)
    label = C.input((outputs), np.float32)

    # Instantiate the feedforward classification model
    my_model = Sequential([
        Dense(hidden_dimension, activation=C.sigmoid),
        Dense(outputs)])
    z = my_model(features)

    ce = C.cross_entropy_with_softmax(z, label)
    pe = C.classification_error(z, label)

    # Initialize the trainer
    # Instantiate the trainer object to drive the model training
    lr_per_minibatch = learning_rate_schedule(0.125, UnitType.minibatch)
    progress_printer = ProgressPrinter(0)
    trainer = C.Trainer(z, (ce, pe), [sgd(z.parameters, lr=lr_per_minibatch)], [progress_printer])

    # Get minibatches of training data and perform model training
    minibatch_size = 25
    num_minibatches_to_train = 1024

    aggregate_loss = 0.0
    for i in range(num_minibatches_to_train):
        train_features, labels = generate_random_data(minibatch_size, inputs, outputs)
        # Train
        # Specify the mapping of input variables in the model to actual minibatch data to be trained with
        trainer.train_minibatch({features: train_features, label: labels})
        sample_count = trainer.previous_minibatch_sample_count
        aggregate_loss += trainer.previous_minibatch_loss_average * sample_count

    last_avg_error = aggregate_loss / trainer.total_number_of_samples_seen

    test_features, test_labels = generate_random_data(minibatch_size, inputs, outputs)
    avg_error = trainer.test_minibatch({features: test_features, label: test_labels})
    print(' error rate on an unseen minibatch: {}'.format(avg_error))
    return last_avg_error, avg_error

if __name__ == '__main__':
    np.random.seed(98052)
    ffnet()
Run from the command line: python simplenet.py
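The one-hot conversion inside generate_random_data (the class_ind / np.hstack trick) can be verified on its own in plain NumPy:

```python
import numpy as np

num_classes = 3
Y = np.array([[0], [2], [1]])  # integer class labels, shape (3, 1)

# One boolean column per class, then stack the columns side by side
class_ind = [Y == class_number for class_number in range(num_classes)]
Y_onehot = np.asarray(np.hstack(class_ind), dtype=np.float32)
print(Y_onehot)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```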