The basic TensorFlow workflow

Import the packages and check their versions

import tensorflow as tf
import tensorflow.keras as keras
print(tf.__version__)
print(keras.__version__)
2.0.0
2.2.4-tf

Load the dataset

If the automatic download does not go smoothly, you can fetch the MNIST file manually from https://www.kaggle.com/vikramtiwari/mnist-numpy/data and copy it into the keras/datasets folder (i.e. ~/.keras/datasets under your home directory).

mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
print(train_images.shape)
print(test_images.shape)
print(train_labels.shape)
print(test_labels.shape)
(60000, 28, 28)
(10000, 28, 28)
(60000,)
(10000,)
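One step the snippet above skips: load_data() returns uint8 arrays with pixel values in 0–255, and scaling them to floats in [0, 1] is a common preprocessing step before fitting (unscaled input may be part of why the first-epoch loss in the training log further below starts so high). A sketch using a random stand-in array so it runs without downloading MNIST:

```python
import numpy as np

# Stand-in with the same dtype/shape that mnist.load_data() returns;
# substitute the real train_images/test_images in practice.
train_images = np.random.randint(0, 256, size=(60000, 28, 28), dtype=np.uint8)

# uint8 in [0, 255] -> float32 in [0.0, 1.0]
train_images = train_images.astype("float32") / 255.0
print(train_images.dtype, train_images.min(), train_images.max())
```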

Build the model and make predictions

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784-vector
  tf.keras.layers.Dense(128, activation='relu'),    # fully connected hidden layer
  tf.keras.layers.Dropout(0.2),                     # drop 20% of units during training
  tf.keras.layers.Dense(10, activation='softmax')   # one probability per digit class
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
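As a quick sanity check on the architecture above, the trainable parameter count can be worked out by hand (plain Python, no TensorFlow needed; this is the same total that model.summary() reports):

```python
# Each Dense layer has (inputs x units) weights plus one bias per unit;
# Flatten and Dropout contribute no parameters.
flatten_out = 28 * 28                  # Flatten: 28x28 image -> 784 values
dense1 = flatten_out * 128 + 128       # Dense(128): 100,480 parameters
dense2 = 128 * 10 + 10                 # Dense(10): 1,290 parameters
print(dense1 + dense2)                 # 101,770 trainable parameters in total
```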

The two numbers in the last line of the output below are the loss and the accuracy; it would be even better if the accuracy and loss values could also be plotted as curves. At first glance TensorFlow looks simpler than PyTorch, since you write far less code, but the price of the heavy encapsulation is that you never have to touch the underlying implementation, so the principles you have learned cannot be carried down into every detail; it is top-heavy, and the moment you need something that cannot simply be taken off the shelf, you are stuck. Another drawback is that it is hard to access the intermediate results of the computation: it is like cooking where you only get a cleaver for hacking bones, with no paring knife for carving a radish.
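On the point about intermediate results, tf.keras does offer an escape hatch: any inner layer's activations can be read by wrapping that layer's output in a new Model. A sketch that rebuilds the same architecture so it runs on its own (the zero-filled input is just a placeholder):

```python
import numpy as np
import tensorflow as tf

# Same model as above, rebuilt so this snippet is self-contained.
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Wrap the 128-unit Dense layer's output in a new Model; calling it
# returns that layer's activations instead of the final softmax.
feature_extractor = tf.keras.Model(
    inputs=model.inputs, outputs=model.layers[1].output)
hidden = feature_extractor(np.zeros((1, 28, 28), dtype="float32"))
print(hidden.shape)  # (1, 128)
```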

model.fit(train_images, train_labels, epochs=5)
model.evaluate(test_images, test_labels, verbose=2)
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 6s 93us/sample - loss: 2.5732 - accuracy: 0.7574
Epoch 2/5
60000/60000 [==============================] - 5s 84us/sample - loss: 0.6132 - accuracy: 0.8468
Epoch 3/5
60000/60000 [==============================] - 5s 81us/sample - loss: 0.4904 - accuracy: 0.8767
Epoch 4/5
60000/60000 [==============================] - 4s 59us/sample - loss: 0.4061 - accuracy: 0.8967
Epoch 5/5
60000/60000 [==============================] - 4s 59us/sample - loss: 0.3761 - accuracy: 0.9045
10000/1 - 0s - loss: 0.1850 - accuracy: 0.9370
model.evaluate also returns its results as a plain list, [test loss, test accuracy]:

[0.2977512352705002, 0.937]

(The loss in the verbose progress line above differs from the returned one; the odd "10000/1" counter and its loss figure were a display quirk of TF 2.0's verbose=2 output, so the returned list is the value to trust.)
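As for the wish expressed earlier about plotting the curves: model.fit returns a History object whose history attribute is a plain dict of per-epoch metric lists, so matplotlib can draw it directly. A sketch that hardcodes the numbers from the training log above instead of running a live fit:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# In real use: history = model.fit(...); history.history is this dict.
history = {
    "loss":     [2.5732, 0.6132, 0.4904, 0.4061, 0.3761],
    "accuracy": [0.7574, 0.8468, 0.8767, 0.8967, 0.9045],
}
epochs = range(1, len(history["loss"]) + 1)

fig, ax1 = plt.subplots()
ax1.plot(epochs, history["loss"], "r-", label="loss")
ax1.set_xlabel("epoch")
ax1.set_ylabel("loss")
ax2 = ax1.twinx()  # second y-axis, since loss and accuracy have different scales
ax2.plot(epochs, history["accuracy"], "b--", label="accuracy")
ax2.set_ylabel("accuracy")
fig.savefig("training_curves.png")
```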