TensorFlow 2.0 Notes 13: Keras High-Level API

1. Keras High-Level API (Part 1)

1.1 The Five Main Features

1.2 Main Topic: Metrics

  • For accuracy there is a ready-made meter: metrics.Accuracy().
  • For simply averaging a value (such as the loss), there is a more general meter: metrics.Mean(). Both follow the same lifecycle, shown in the sketch below.
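
A minimal sketch of that shared lifecycle (toy values, not taken from the MNIST run below): every meter accumulates with update_state(), is read with result(), and is cleared with reset_states().

import tensorflow as tf
from tensorflow.keras import metrics

loss_meter = metrics.Mean()
loss_meter.update_state(2.3)               # add a value to the running buffer
loss_meter.update_state(1.7)
print(loss_meter.result().numpy())         # 2.0, the mean since the last reset
loss_meter.reset_states()                  # clear the buffer

acc_meter = metrics.Accuracy()
acc_meter.update_state([0, 1, 2], [0, 1, 1])   # (labels, predictions)
print(acc_meter.result().numpy())              # 0.6666667: 2 of 3 correct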

1.3 Hands-On Example for Section 1.2

import tensorflow as tf
from tensorflow.keras import datasets, layers, optimizers, Sequential, metrics


def preprocess(x, y):
    x = tf.cast(x, dtype=tf.float32) / 255.
    y = tf.cast(y, dtype=tf.int32)

    return x, y


batchsz = 128
(x, y), (x_val, y_val) = datasets.mnist.load_data()
print('datasets:', x.shape, y.shape, x.min(), x.max())

db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.map(preprocess).shuffle(60000).batch(batchsz).repeat(10)

ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val))
ds_val = ds_val.map(preprocess).batch(batchsz)

network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])
network.build(input_shape=(None, 28 * 28))
network.summary()

optimizer = optimizers.Adam(learning_rate=0.01)


# Step 1: track both the loss and the accuracy, so two meters are created here:
# an Accuracy metric and a Mean metric for averaging the loss.
acc_meter = metrics.Accuracy()
loss_meter = metrics.Mean()

for step, (x, y) in enumerate(db):

    with tf.GradientTape() as tape:
        # [b, 28, 28] => [b, 784]
        x = tf.reshape(x, (-1, 28 * 28))
        # [b, 784] => [b, 10]
        out = network(x)
        # [b] => [b, 10]
        y_onehot = tf.one_hot(y, depth=10)
        # per-sample crossentropy [b], reduced to a scalar
        loss = tf.reduce_mean(tf.losses.categorical_crossentropy(y_onehot, out, from_logits=True))


        # Step 2: update the meter every time the loss is computed, so the running average stays accurate.
        loss_meter.update_state(loss)

    grads = tape.gradient(loss, network.trainable_variables)
    optimizer.apply_gradients(zip(grads, network.trainable_variables))

    if step % 100 == 0:

        # Step 3: at logging time, print the meter's result.
        print(step, 'loss:', loss_meter.result().numpy())

        # Step 4: clear the loss buffer, so that each printed value is the average loss over
        # the last 100 steps rather than just the loss at step 100. The numbers look much more stable this way.
        loss_meter.reset_states()

    # Evaluation: now look at the accuracy meter.
    if step % 500 == 0:
        total, total_correct = 0., 0

        # First: clear the acc_meter buffer.
        acc_meter.reset_states()

        # Note: this inner enumerate shadows the outer `step`, which is why the evaluation
        # lines in the output below all print 78 (the index of the last validation batch).
        for step, (x, y) in enumerate(ds_val):
            # [b, 28, 28] => [b, 784]
            x = tf.reshape(x, (-1, 28 * 28))
            # [b, 784] => [b, 10]
            out = network(x)

            # [b, 10] => [b]
            pred = tf.argmax(out, axis=1)
            pred = tf.cast(pred, dtype=tf.int32)
            # bool type
            correct = tf.equal(pred, y)
            # bool tensor => int tensor => numpy
            total_correct += tf.reduce_sum(tf.cast(correct, dtype=tf.int32)).numpy()
            total += x.shape[0]

            # Then: feed (labels, predictions) into acc_meter's buffer.
            acc_meter.update_state(y, pred)

        print(step, 'Evaluate Acc:', total_correct / total, acc_meter.result().numpy())
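
The last Dense(10) layer has no activation, so the network outputs raw logits, and from_logits=True tells categorical_crossentropy to apply the softmax internally. A minimal sketch with hypothetical values showing that this matches applying the softmax by hand:

import tensorflow as tf

logits = tf.constant([[2.0, 0.5, -1.0]])
y_onehot = tf.constant([[1.0, 0.0, 0.0]])

# softmax applied inside the loss
loss_a = tf.losses.categorical_crossentropy(y_onehot, logits, from_logits=True)
# softmax applied by hand; equal up to floating-point error
loss_b = tf.losses.categorical_crossentropy(y_onehot, tf.nn.softmax(logits), from_logits=False)
print(loss_a.numpy(), loss_b.numpy())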

  • Note that we don't rely only on acc_meter here: the same accuracy is also computed by hand with the variables total and total_correct, which count the total number of samples and the total number of correct predictions. A small comparison sketch follows the run output.
  • Run output:
C:\Anaconda3\envs\tf2\python.exe E:/Codes/Demo/TF2/metrics.py
datasets: (60000, 28, 28) (60000,) 0 255
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                multiple                  200960    
_________________________________________________________________
dense_1 (Dense)              multiple                  32896     
_________________________________________________________________
dense_2 (Dense)              multiple                  8256      
_________________________________________________________________
dense_3 (Dense)              multiple                  2080      
_________________________________________________________________
dense_4 (Dense)              multiple                  330       
=================================================================
Total params: 244,522
Trainable params: 244,522
Non-trainable params: 0
_________________________________________________________________
0 loss: 2.351126
78 Evaluate Acc: 0.2671 0.2671
100 loss: 0.50758445
200 loss: 0.25146392
300 loss: 0.19939858
400 loss: 0.19180286
500 loss: 0.15045771
78 Evaluate Acc: 0.9591 0.9591
600 loss: 0.1392191
700 loss: 0.13043576
800 loss: 0.13935085
900 loss: 0.12730792
1000 loss: 0.119043715
78 Evaluate Acc: 0.9707 0.9707
1100 loss: 0.10553091
1200 loss: 0.10021621
1300 loss: 0.111887835
1400 loss: 0.10525742
1500 loss: 0.10338638
78 Evaluate Acc: 0.9668 0.9668
1600 loss: 0.09393982
1700 loss: 0.10706411
1800 loss: 0.0876565
1900 loss: 0.09356122
2000 loss: 0.07625327
78 Evaluate Acc: 0.969 0.969
2100 loss: 0.08937727
2200 loss: 0.08263406
2300 loss: 0.104584485
2400 loss: 0.10313261
2500 loss: 0.094911754
78 Evaluate Acc: 0.9671 0.9671
2600 loss: 0.07035615
2700 loss: 0.08280234
2800 loss: 0.0859525
2900 loss: 0.065915905
3000 loss: 0.06708269
78 Evaluate Acc: 0.9739 0.9739
3100 loss: 0.06600948
3200 loss: 0.084229834
3300 loss: 0.0853124
3400 loss: 0.064022705
3500 loss: 0.0710441
78 Evaluate Acc: 0.9659 0.9659
3600 loss: 0.07671407
3700 loss: 0.08920249
3800 loss: 0.05802461
3900 loss: 0.061849356
4000 loss: 0.071581885
78 Evaluate Acc: 0.9711 0.9711
4100 loss: 0.071715534
4200 loss: 0.06235297
4300 loss: 0.06333204
4400 loss: 0.07377879
4500 loss: 0.06499765
78 Evaluate Acc: 0.9749 0.9749
4600 loss: 0.067099705

Process finished with exit code 0
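
As noted above, here is a minimal sketch (toy tensors, not the MNIST data) showing that the manual total / total_correct bookkeeping and metrics.Accuracy produce the same number:

import tensorflow as tf
from tensorflow.keras import metrics

y = tf.constant([3, 1, 4, 1], dtype=tf.int32)      # ground-truth labels
pred = tf.constant([3, 1, 0, 1], dtype=tf.int32)   # hypothetical predictions

correct = tf.equal(pred, y)                                        # bool tensor
total_correct = tf.reduce_sum(tf.cast(correct, tf.int32)).numpy()  # 3
total = y.shape[0]                                                 # 4
print(total_correct / total)           # 0.75

acc_meter = metrics.Accuracy()
acc_meter.update_state(y, pred)
print(acc_meter.result().numpy())      # 0.75, the same value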
