Deep Learning Week 4, Lesson 2: Keras and Building Residual Networks

Disclaimer

This article is based on the notes of He Kuan (何寬).

Preface

Structure of this article:
1. Learn Keras, a high-level neural network framework that runs on top of several lower-level frameworks, including TensorFlow and CNTK.
2. Build a very deep convolutional network using residual blocks.

1. Getting Started with Keras: Smile Recognition

1.1 Why use the Keras framework

Keras was built so that deep learning engineers can construct and experiment with different models very quickly. It is a higher-level framework that runs on top of lower-level frameworks such as TensorFlow and CNTK, providing extra abstractions; its key advantage is that it lets you go from idea to result in the shortest possible time. The trade-off is that Keras is more restrictive than the lower-level frameworks, so some very complex models can be implemented in TensorFlow but not in Keras. Still, Keras works fine for many common models.

1.2 Task description

Build an algorithm that uses pictures from the front-door camera to check whether the person is happy; the door opens only when the person is happy.

import numpy as np
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
import kt_utils 

import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow

Load the dataset:

X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = kt_utils.load_dataset()
X_train  = X_train_orig/255
X_test  = X_test_orig/255

Y_train = Y_train_orig.T
Y_test = Y_test_orig.T

print(X_train.shape[0],X_test.shape[0],X_train.shape,Y_train.shape,X_test.shape,Y_test.shape)

Result:

600 150 (600, 64, 64, 3) (600, 1) (150, 64, 64, 3) (150, 1)
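As a sanity check, the scaling and transposing above can be reproduced on stand-in data. kt_utils.load_dataset() is a course helper; the arrays below are random dummies with the same shapes it returns:

```python
import numpy as np

# Dummy stand-ins with the same shapes the course helper returns
X_train_orig = np.random.randint(0, 256, size=(600, 64, 64, 3)).astype("float64")
Y_train_orig = np.random.randint(0, 2, size=(1, 600))

X_train = X_train_orig / 255   # scale pixel values from [0, 255] to [0, 1]
Y_train = Y_train_orig.T       # labels go from (1, m) to (m, 1) for Keras

print(X_train.shape, Y_train.shape)  # (600, 64, 64, 3) (600, 1)
```

The transpose matters because Keras expects one row per example, while the course helpers return one column per example.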

1.3 Building the model with Keras

Here is an example of a model outline:

def model(input_shape):
    """
    Model outline
    """
    
    X_input = Input(input_shape)
    
    X = ZeroPadding2D((3,3))(X_input)
    
    X = Conv2D(32,(7,7),strides = (1,1),name='conv0')(X)
    X = BatchNormalization(axis = 3,name='bn0')(X)
    X = Activation('relu')(X)
    
    X = MaxPooling2D((2,2),name="max_pool")(X)
    
    X = Flatten()(X)
    X = Dense(1,activation='sigmoid',name='fc')(X)
    
    model = Model(inputs = X_input,outputs = X,name='HappyModel')
    
    return model

**Note:** Keras uses variable names differently from the NumPy and TensorFlow code we wrote before. Instead of creating a new variable for each step of forward propagation (X, Z1, A1, Z2, A2, ...) to pass values between layers, in Keras we simply overwrite X at every layer; we only need the latest value, not each layer's intermediate result. The one exception is X_input, which we keep separate because it is the input data and we need it in the final step when creating the Model.

def HappyModel(input_shape):
    """
    Implements a model that detects smiles

    Arguments:
        input_shape - the dimensions of the input data
    Returns:
        model - the created Keras model
    """
    X_input = Input(input_shape)
    
    X = ZeroPadding2D((3,3))(X_input)
    
    X = Conv2D(32,(7,7),strides = (1,1),name='conv0')(X)
    X = BatchNormalization(axis = 3,name='bn0')(X)
    X = Activation('relu')(X)
    
    X = MaxPooling2D((2,2),name="max_pool")(X)
    
    X = Flatten()(X)
    X = Dense(1,activation='sigmoid',name='fc')(X)
    
    model = Model(inputs = X_input,outputs = X,name='HappyModel')
    
    return model

Now that the model is designed, we can train and test it as follows:

  • Create an instance of the model
  • Compile the model: model.compile(optimizer = '...', loss = '...', metrics = ['accuracy'])
  • Train the model: model.fit(x = ..., y = ..., epochs = ..., batch_size = ...)
  • Evaluate the model: model.evaluate(x = ..., y = ...)

Test:

happy_model = HappyModel(X_train.shape[1:])
happy_model.compile('adam','binary_crossentropy',metrics=['accuracy'])
happy_model.fit(X_train,Y_train,epochs=10,batch_size=50)
preds = happy_model.evaluate(X_test,Y_test,batch_size=32,verbose=1,sample_weight=None)
print(preds[0])
print(preds[1])

Result:

Epoch 1/10
600/600 [==============================] - 8s 13ms/step - loss: 2.1831 - acc: 0.5567
Epoch 2/10
600/600 [==============================] - 8s 13ms/step - loss: 0.6674 - acc: 0.7700
Epoch 3/10
600/600 [==============================] - 9s 15ms/step - loss: 0.3431 - acc: 0.8383
Epoch 4/10
600/600 [==============================] - 8s 14ms/step - loss: 0.1949 - acc: 0.9133
Epoch 5/10
600/600 [==============================] - 8s 13ms/step - loss: 0.1515 - acc: 0.9433
Epoch 6/10
600/600 [==============================] - 8s 13ms/step - loss: 0.1541 - acc: 0.9433
Epoch 7/10
600/600 [==============================] - 8s 13ms/step - loss: 0.0998 - acc: 0.9700
Epoch 8/10
600/600 [==============================] - 8s 13ms/step - loss: 0.0950 - acc: 0.9717
Epoch 9/10
600/600 [==============================] - 8s 13ms/step - loss: 0.0777 - acc: 0.9750
Epoch 10/10
600/600 [==============================] - 8s 13ms/step - loss: 0.0733 - acc: 0.9783
150/150 [==============================] - 1s 6ms/step
0.4635606718063354
0.806666665871938

An accuracy above 75% counts as acceptable; if yours is below 75%, try changing the model, for example by stacking CONV -> BATCHNORM -> RELU blocks such as:

X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
  • You can use a max-pooling layer after each such block; it reduces the height and width dimensions.
  • Change the optimizer; Adam is used here.
  • If the model struggles to run and you hit out-of-memory problems, lower the batch_size (e.g. 12).
  • Run more epochs, until you see good results.

1.4 Test your own image

img_path = '1.png'
img = image.load_img(img_path,target_size=(64,64))
imshow(img)

x = image.img_to_array(img)
x = np.expand_dims(x,axis=0)
x = preprocess_input(x)

print(happy_model.predict(x))

Result:

[[1.]]

1.5 Some other useful functions

  • model.summary(): prints the details of each layer's output shape and parameter count
  • plot_model(): plots the model layout
happy_model.summary()

Result:

Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 64, 64, 3)         0         
_________________________________________________________________
zero_padding2d_1 (ZeroPaddin (None, 70, 70, 3)         0         
_________________________________________________________________
conv0 (Conv2D)               (None, 64, 64, 32)        4736      
_________________________________________________________________
bn0 (BatchNormalization)     (None, 64, 64, 32)        128       
_________________________________________________________________
activation_1 (Activation)    (None, 64, 64, 32)        0         
_________________________________________________________________
max_pool (MaxPooling2D)      (None, 32, 32, 32)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 32768)             0         
_________________________________________________________________
fc (Dense)                   (None, 1)                 32769     
=================================================================
Total params: 37,633
Trainable params: 37,569
Non-trainable params: 64
_________________________________________________________________
%matplotlib inline
plot_model(happy_model,to_file='happy_model.png')
SVG(model_to_dot(happy_model).create(prog='dot',format='svg'))

(Figure: plot of the happy_model layout)
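The Param # column in the summary above can be verified by hand: a Conv2D layer has kernel_h * kernel_w * in_channels * filters weights plus one bias per filter, BatchNormalization has 4 parameters per channel (gamma and beta are trainable; the moving mean and variance are not), and a Dense layer has inputs * units weights plus one bias per unit. A quick check:

```python
conv0 = 7 * 7 * 3 * 32 + 32     # kernel weights + biases -> 4736
bn0 = 4 * 32                    # gamma, beta, moving mean, moving var -> 128
fc = 32 * 32 * 32 * 1 + 1       # Flatten gives 32*32*32 = 32768 inputs, 1 unit -> 32769

total = conv0 + bn0 + fc        # 37633
non_trainable = 2 * 32          # bn0's moving statistics -> 64
print(total, total - non_trainable, non_trainable)  # 37633 37569 64
```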

2. Building a Residual Network

2.1 Why use residual networks

To solve the problem that very deep networks are difficult to train.
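The core mechanism is the skip connection: a residual block computes some function F(x) on the main path and outputs ReLU(F(x) + x), so learning the identity map only requires driving F toward zero, and gradients also flow backward through the unmodified x term. A minimal NumPy sketch of the idea (the names here are illustrative, not from the assignment code):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

def residual_step(x, main_path):
    """Output of a residual block: activation of (main-path output + shortcut)."""
    return relu(main_path(x) + x)

x = np.array([1.0, -2.0, 3.0])
# If the main path learns the zero function, the block reduces to ReLU(x):
print(residual_step(x, lambda v: np.zeros_like(v)))  # [1. 0. 3.]
```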

2.2 Building the residual network

In this part, we will:

  • Implement the basic residual blocks
  • Put these blocks together to implement and train a neural network for image classification.
import numpy as np
import tensorflow as tf

from keras import layers
from keras.layers import Input,Add,Dense,Activation,ZeroPadding2D,BatchNormalization,Flatten,Conv2D,AveragePooling2D,MaxPooling2D,GlobalMaxPooling2D
from keras.models import Model,load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from keras.initializers import glorot_uniform

import pydot
from IPython.display import SVG
import scipy.misc
from matplotlib.pyplot import imshow
import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)

import resnets_utils

2.2.1 The identity block

The identity block is the standard block used in residual networks; it corresponds to the case where the input activation has the same dimensions as the output activation, as shown below:
(Figure: identity block whose skip connection jumps over two layers)
In the figure, the upper curved path is the "shortcut" and the lower straight path is the main path. We still include Conv2D and ReLU in each step, and to speed up training we also normalize the data at each step with BatchNorm.
In practice, the skip connection skips over 3 hidden layers:
(Figure: identity block whose skip connection jumps over three layers)
The steps are as follows:
1. First part of the main path:

  • The first Conv2D has F1 filters of size (1,1) with stride (1,1) and "valid" padding; it is named conv_name_base + '2a' and its weights are initialized with random seed 0.
  • The first BatchNorm normalizes along the channel axis and is named bn_name_base + '2a'.
  • A ReLU activation follows; it has no name and no hyperparameters.

2. Second part of the main path:

  • The second Conv2D has F2 filters of size (f,f) with stride (1,1) and "same" padding; it is named conv_name_base + '2b' and its weights are initialized with random seed 0.
  • The second BatchNorm normalizes along the channel axis and is named bn_name_base + '2b'.
  • A ReLU activation follows; it has no name and no hyperparameters.

3. Third part of the main path:

  • The third Conv2D has F3 filters of size (1,1) with stride (1,1) and "valid" padding; it is named conv_name_base + '2c' and its weights are initialized with random seed 0.
  • The third BatchNorm normalizes along the channel axis and is named bn_name_base + '2c'.
  • Note that there is no ReLU here.

4. Final step:

  • Add the shortcut to the main-path output
  • Apply a ReLU activation; it has no name and no hyperparameters.
def identity_block(X,f,filters,stage,block):
    """
    Implements the identity block shown in Figure 3

    Arguments:
        X - input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
        f - integer, the dimension of the middle CONV window in the main path
        filters - list of integers, the number of filters in each CONV layer of the main path
        stage - integer, used together with block to name the layers according to their position in the network
        block - string, used together with stage to name the layers according to their position in the network
    Returns:
        X - output of the identity block, a tensor of shape (n_H, n_W, n_C)
    """
    conv_name_base = "res" + str(stage) + block + "_branch"
    bn_name_base   = "bn"  + str(stage) + block + "_branch"
    
    F1,F2,F3 = filters
    
    X_shortcut = X
    
    X = Conv2D(filters = F1,kernel_size=(1,1),strides=(1,1),padding="valid",name=conv_name_base+"2a",kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3,name=bn_name_base+"2a")(X)
    X = Activation("relu")(X)
    
    X = Conv2D(filters=F2,kernel_size=(f,f),strides=(1,1),padding="same",name=conv_name_base+"2b",kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3,name=bn_name_base+"2b")(X)
    X = Activation("relu")(X)
    
    X = Conv2D(filters=F3,kernel_size=(1,1),strides=(1,1),padding="valid",name=conv_name_base+"2c",kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3,name=bn_name_base+"2c")(X)
    
    X = Add()([X,X_shortcut])
    X = Activation("relu")(X)
    
    return X

Test:

tf.reset_default_graph()
with tf.Session() as test:
    np.random.seed(1)
    A_prev = tf.placeholder("float",[3,4,4,6])
    X = np.random.randn(3,4,4,6)
    A = identity_block(A_prev,f=2,filters=[2,4,6],stage=1,block="a")
    
    test.run(tf.global_variables_initializer())
    out = test.run([A],feed_dict={A_prev:X,K.learning_phase():0})
    print("out = " + str(out[0][1][1][0]))
    
    test.close()

Result:

out = [0.94822985 0.         1.1610144  2.747859   0.         1.36677   ]

2.2.2 The convolutional block

The convolutional block is the other type of residual block in a ResNet; it is used when the input and output dimensions do not match. It differs from the identity block above in that the shortcut path contains a Conv2D layer:
(Figure: convolutional block with a Conv2D layer in the shortcut path)
The steps are:
1. First part of the main path:

  • The first Conv2D has F1 filters of size (1,1) with stride (s,s) and "valid" padding; it is named conv_name_base + '2a' and its weights are initialized with random seed 0.
  • The first BatchNorm normalizes along the channel axis and is named bn_name_base + '2a'.
  • A ReLU activation follows; it has no name and no hyperparameters.

2. Second part of the main path:

  • The second Conv2D has F2 filters of size (f,f) with stride (1,1) and "same" padding; it is named conv_name_base + '2b' and its weights are initialized with random seed 0.
  • The second BatchNorm normalizes along the channel axis and is named bn_name_base + '2b'.
  • A ReLU activation follows; it has no name and no hyperparameters.

3. Third part of the main path:

  • The third Conv2D has F3 filters of size (1,1) with stride (1,1) and "valid" padding; it is named conv_name_base + '2c' and its weights are initialized with random seed 0.
  • The third BatchNorm normalizes along the channel axis and is named bn_name_base + '2c'.
  • There is no ReLU here.

4. The shortcut path:

  • A Conv2D with F3 filters of size (1,1), stride (s,s) and "valid" padding, named conv_name_base + '1'.
  • A BatchNorm normalizing along the channel axis, named bn_name_base + '1'.

5. Final step:

  • Add the shortcut output to the main-path output
  • Apply a ReLU activation.
def convolutional_block(X,f,filters,stage,block,s=2):
    """
    Implements the convolutional block shown in Figure 5

    Arguments:
        X - input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
        f - integer, the dimension of the middle CONV window in the main path
        filters - list of integers, the number of filters in each CONV layer of the main path
        stage - integer, used together with block to name the layers according to their position in the network
        block - string, used together with stage to name the layers according to their position in the network
        s - integer, the stride to use
    Returns:
        X - output of the convolutional block, a tensor of shape (n_H, n_W, n_C)
    """
    conv_name_base = "res" + str(stage) + block + "_branch"
    bn_name_base   = "bn"  + str(stage) + block + "_branch"
    
    F1,F2,F3 = filters
    
    X_shortcut = X
    
    X = Conv2D(filters = F1,kernel_size=(1,1),strides=(s,s),padding="valid",name=conv_name_base+"2a",kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3,name=bn_name_base+"2a")(X)
    X = Activation("relu")(X)
    
    X = Conv2D(filters=F2,kernel_size=(f,f),strides=(1,1),padding="same",name=conv_name_base+"2b",kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3,name=bn_name_base+"2b")(X)
    X = Activation("relu")(X)
    
    X = Conv2D(filters=F3,kernel_size=(1,1),strides=(1,1),padding="valid",name=conv_name_base+"2c",kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3,name=bn_name_base+"2c")(X)
    
    X_shortcut = Conv2D(filters = F3,kernel_size=(1,1),strides=(s,s),padding="valid",name=conv_name_base+"1",kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis=3,name=bn_name_base+"1")(X_shortcut)
    
    
    X = Add()([X,X_shortcut])
    X = Activation("relu")(X)
    
    return X

Test:

tf.reset_default_graph()
with tf.Session() as test:
    np.random.seed(1)
    A_prev = tf.placeholder("float",[3,4,4,6])
    X = np.random.randn(3,4,4,6)
    A = convolutional_block(A_prev,f=2,filters=[2,4,6],stage=1,block="a")
    
    test.run(tf.global_variables_initializer())
    out = test.run([A],feed_dict={A_prev:X,K.learning_phase():0})
    print("out = " + str(out[0][1][1][0]))
    
    test.close()

Result:

out = [0.09018463 1.2348977  0.46822017 0.0367176  0.         0.65516603]

2.3 Building your first residual network (50 layers)

We have now implemented all the residual blocks we need. The figure below describes the network's architecture in detail; 'ID BLOCK' stands for the identity block, and 'ID BLOCK x3' means three identity blocks stacked together.
(Figure: ResNet-50 architecture diagram)
The details of this 50-layer network are as follows:
1. Zero-pad the input with padding = (3,3)
2. Stage 1:

  • A conv layer with 64 filters of size (7,7) and stride (2,2), named 'conv1'
  • A BatchNorm layer normalizing the input along the channel axis
  • A max-pooling layer with a (3,3) window and a (2,2) stride

3. Stage 2:

  • A convolutional block with three sets of filters of sizes [64,64,256], f=3, s=1, block='a'
  • 2 identity blocks with three sets of filters of sizes [64,64,256], f=3, blocks 'b' and 'c'

4. Stage 3:

  • A convolutional block with three sets of filters of sizes [128,128,512], f=3, s=2, block='a'
  • 3 identity blocks with three sets of filters of sizes [128,128,512], f=3, blocks 'b', 'c' and 'd'

5. Stage 4:

  • A convolutional block with three sets of filters of sizes [256,256,1024], f=3, s=2, block='a'
  • 5 identity blocks with three sets of filters of sizes [256,256,1024], f=3, blocks 'b', 'c', 'd', 'e' and 'f'

6. Stage 5:

  • A convolutional block with three sets of filters of sizes [512,512,2048], f=3, s=2, block='a'
  • 2 identity blocks with three sets of filters of sizes [512,512,2048], f=3, blocks 'b' and 'c'

7. An average-pooling layer with a (2,2) window, named 'avg_pool'
8. A Flatten operation, which has no hyperparameters and no name
9. A fully connected (Dense) layer with a softmax activation, named "fc" + str(classes)
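As a sanity check on the stage list above, the spatial sizes that model.summary() reports later can be traced with the output-size formula for a 'valid' convolution or pooling layer, floor((n - f) / s) + 1:

```python
def out_size(n, f, s):
    # Output width/height of a 'valid' conv or pool with window f and stride s
    return (n - f) // s + 1

n = 64 + 2 * 3            # zero padding (3,3): 64 -> 70
n = out_size(n, 7, 2)     # conv1, (7,7) stride 2:      70 -> 32
n = out_size(n, 3, 2)     # max pool, (3,3) stride 2:   32 -> 15
n = out_size(n, 1, 2)     # stage 3 conv block, s=2:    15 -> 8
n = out_size(n, 1, 2)     # stage 4 conv block, s=2:    8 -> 4
n = out_size(n, 1, 2)     # stage 5 conv block, s=2:    4 -> 2
print(n)  # 2; the (2,2) average pooling then leaves a 1x1x2048 volume
```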

def ResNet50(input_shape=(64,64,3),classes=6):
    """
    Implements ResNet50 with the following architecture:
    CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
    -> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER

    Arguments:
        input_shape - the dimensions of the images in the dataset
        classes - integer, the number of classes
    Returns:
        model - a Keras Model
    """
    # Define the input as a tensor
    X_input = Input(input_shape)
    
    # Zero padding
    X = ZeroPadding2D((3,3))(X_input)
    
    #stage1
    X = Conv2D(filters=64, kernel_size=(7,7), strides=(2,2), name="conv1",
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name="bn_conv1")(X)
    X = Activation("relu")(X)
    X = MaxPooling2D(pool_size=(3,3), strides=(2,2))(X)
    
    #stage2
    X = convolutional_block(X, f=3, filters=[64,64,256], stage=2, block="a", s=1)
    X = identity_block(X, f=3, filters=[64,64,256], stage=2, block="b")
    X = identity_block(X, f=3, filters=[64,64,256], stage=2, block="c")
    
    #stage3
    X = convolutional_block(X, f=3, filters=[128,128,512], stage=3, block="a", s=2)
    X = identity_block(X, f=3, filters=[128,128,512], stage=3, block="b")
    X = identity_block(X, f=3, filters=[128,128,512], stage=3, block="c")
    X = identity_block(X, f=3, filters=[128,128,512], stage=3, block="d")
    
    #stage4
    X = convolutional_block(X, f=3, filters=[256,256,1024], stage=4, block="a", s=2)
    X = identity_block(X, f=3, filters=[256,256,1024], stage=4, block="b")
    X = identity_block(X, f=3, filters=[256,256,1024], stage=4, block="c")
    X = identity_block(X, f=3, filters=[256,256,1024], stage=4, block="d")
    X = identity_block(X, f=3, filters=[256,256,1024], stage=4, block="e")
    X = identity_block(X, f=3, filters=[256,256,1024], stage=4, block="f")
    
    #stage5
    X = convolutional_block(X, f=3, filters=[512,512,2048], stage=5, block="a", s=2)
    X = identity_block(X, f=3, filters=[512,512,2048], stage=5, block="b")
    X = identity_block(X, f=3, filters=[512,512,2048], stage=5, block="c")
    
    # Average pooling
    X = AveragePooling2D(pool_size=(2,2),padding="same")(X)
    
    # Output layer
    X = Flatten()(X)
    X = Dense(classes, activation="softmax", name="fc"+str(classes),
              kernel_initializer=glorot_uniform(seed=0))(X)
    
    
    # Create the model
    model = Model(inputs=X_input, outputs=X, name="ResNet50")
    
    return model

Instantiate and compile the model, and prepare the dataset:

model = ResNet50(input_shape=(64,64,3),classes=6)
model.compile(optimizer="adam",loss="categorical_crossentropy",metrics=["accuracy"])
X_train_orig,Y_train_orig,X_test_orig,Y_test_orig,classes=resnets_utils.load_dataset()

X_train = X_train_orig/255
X_test = X_test_orig/255

Y_train = resnets_utils.convert_to_one_hot(Y_train_orig,6).T
Y_test = resnets_utils.convert_to_one_hot(Y_test_orig,6).T

print(X_train.shape[0],X_test.shape[0],X_train.shape,Y_train.shape,X_test.shape,Y_test.shape)

Result:

1080 120 (1080, 64, 64, 3) (1080, 6) (120, 64, 64, 3) (120, 6)
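resnets_utils.convert_to_one_hot turns the (1, m) label vector into a (C, m) one-hot matrix, which is then transposed to (m, C) for Keras. A minimal equivalent sketch (an assumption about the helper, not necessarily its exact code):

```python
import numpy as np

def convert_to_one_hot(Y, C):
    # Pick rows of the CxC identity matrix by label, then transpose to (C, m)
    return np.eye(C)[Y.reshape(-1)].T

Y_orig = np.array([[0, 5, 2, 2]])
Y_one_hot = convert_to_one_hot(Y_orig, 6).T   # (m, C) after the transpose, as used above
print(Y_one_hot.shape)  # (4, 6)
```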

Train the model for two epochs with batch_size = 32; each epoch takes roughly 3 minutes.

model.fit(X_train,Y_train,epochs=2,batch_size=32)

Result:

Epoch 1/2
1080/1080 [==============================] - 151s 140ms/step - loss: 3.0510 - acc: 0.2407
Epoch 2/2
1080/1080 [==============================] - 2597s 2s/step - loss: 2.1708 - acc: 0.3611
  • In epoch 1/2, a loss between 1 and 5 and an accuracy between 0.2 and 0.5 are normal.
  • In epoch 2/2, a loss between 1 and 5 and an accuracy between 0.2 and 0.5 are normal. You can see the loss decreasing and the accuracy rising.

Evaluate the model:

preds = model.evaluate(X_test,Y_test)
print(preds[0],preds[1])

Result:

120/120 [==============================] - 4s 36ms/step
2.5867568492889403 0.16666666666666666

2.4 Test with your own image

from PIL import Image
import numpy as np
import matplotlib.pyplot as plt  # plt is used to display images

%matplotlib inline

img_path = 'images/fingers_big/2.jpg'

my_image = image.load_img(img_path, target_size=(64, 64))
my_image = image.img_to_array(my_image)

my_image = np.expand_dims(my_image,axis=0)
my_image = preprocess_input(my_image)

print("my_image.shape = " + str(my_image.shape))

print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ")
print(model.predict(my_image))

my_image = scipy.misc.imread(img_path)
plt.imshow(my_image)

Result:

my_image.shape = (1, 64, 64, 3)
class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = 
[[ 1.  0.  0.  0.  0.  0.]]

model.summary()

Result:

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 64, 64, 3)    0                                            
__________________________________________________________________________________________________
zero_padding2d_1 (ZeroPadding2D (None, 70, 70, 3)    0           input_1[0][0]                    
__________________________________________________________________________________________________
conv1 (Conv2D)                  (None, 32, 32, 64)   9472        zero_padding2d_1[0][0]           
__________________________________________________________________________________________________
bn_conv1 (BatchNormalization)   (None, 32, 32, 64)   256         conv1[0][0]                      
__________________________________________________________________________________________________
activation_4 (Activation)       (None, 32, 32, 64)   0           bn_conv1[0][0]                   
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 15, 15, 64)   0           activation_4[0][0]               
__________________________________________________________________________________________________
res2a_branch2a (Conv2D)         (None, 15, 15, 64)   4160        max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
bn2a_branch2a (BatchNormalizati (None, 15, 15, 64)   256         res2a_branch2a[0][0]             
__________________________________________________________________________________________________
activation_5 (Activation)       (None, 15, 15, 64)   0           bn2a_branch2a[0][0]              
__________________________________________________________________________________________________
res2a_branch2b (Conv2D)         (None, 15, 15, 64)   36928       activation_5[0][0]               
__________________________________________________________________________________________________
bn2a_branch2b (BatchNormalizati (None, 15, 15, 64)   256         res2a_branch2b[0][0]             
__________________________________________________________________________________________________
activation_6 (Activation)       (None, 15, 15, 64)   0           bn2a_branch2b[0][0]              
__________________________________________________________________________________________________
res2a_branch2c (Conv2D)         (None, 15, 15, 256)  16640       activation_6[0][0]               
__________________________________________________________________________________________________
res2a_branch1 (Conv2D)          (None, 15, 15, 256)  16640       max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
bn2a_branch2c (BatchNormalizati (None, 15, 15, 256)  1024        res2a_branch2c[0][0]             
__________________________________________________________________________________________________
bn2a_branch1 (BatchNormalizatio (None, 15, 15, 256)  1024        res2a_branch1[0][0]              
__________________________________________________________________________________________________
add_2 (Add)                     (None, 15, 15, 256)  0           bn2a_branch2c[0][0]              
                                                                 bn2a_branch1[0][0]               
__________________________________________________________________________________________________
activation_7 (Activation)       (None, 15, 15, 256)  0           add_2[0][0]                      
__________________________________________________________________________________________________
res2b_branch2a (Conv2D)         (None, 15, 15, 64)   16448       activation_7[0][0]               
__________________________________________________________________________________________________
bn2b_branch2a (BatchNormalizati (None, 15, 15, 64)   256         res2b_branch2a[0][0]             
__________________________________________________________________________________________________
activation_8 (Activation)       (None, 15, 15, 64)   0           bn2b_branch2a[0][0]              
__________________________________________________________________________________________________
res2b_branch2b (Conv2D)         (None, 15, 15, 64)   36928       activation_8[0][0]               
__________________________________________________________________________________________________
bn2b_branch2b (BatchNormalizati (None, 15, 15, 64)   256         res2b_branch2b[0][0]             
__________________________________________________________________________________________________
activation_9 (Activation)       (None, 15, 15, 64)   0           bn2b_branch2b[0][0]              
__________________________________________________________________________________________________
res2b_branch2c (Conv2D)         (None, 15, 15, 256)  16640       activation_9[0][0]               
__________________________________________________________________________________________________
bn2b_branch2c (BatchNormalizati (None, 15, 15, 256)  1024        res2b_branch2c[0][0]             
__________________________________________________________________________________________________
add_3 (Add)                     (None, 15, 15, 256)  0           bn2b_branch2c[0][0]              
                                                                 activation_7[0][0]               
__________________________________________________________________________________________________
activation_10 (Activation)      (None, 15, 15, 256)  0           add_3[0][0]                      
__________________________________________________________________________________________________
res2c_branch2a (Conv2D)         (None, 15, 15, 64)   16448       activation_10[0][0]              
__________________________________________________________________________________________________
bn2c_branch2a (BatchNormalizati (None, 15, 15, 64)   256         res2c_branch2a[0][0]             
__________________________________________________________________________________________________
activation_11 (Activation)      (None, 15, 15, 64)   0           bn2c_branch2a[0][0]              
__________________________________________________________________________________________________
res2c_branch2b (Conv2D)         (None, 15, 15, 64)   36928       activation_11[0][0]              
__________________________________________________________________________________________________
bn2c_branch2b (BatchNormalizati (None, 15, 15, 64)   256         res2c_branch2b[0][0]             
__________________________________________________________________________________________________
activation_12 (Activation)      (None, 15, 15, 64)   0           bn2c_branch2b[0][0]              
__________________________________________________________________________________________________
res2c_branch2c (Conv2D)         (None, 15, 15, 256)  16640       activation_12[0][0]              
__________________________________________________________________________________________________
bn2c_branch2c (BatchNormalizati (None, 15, 15, 256)  1024        res2c_branch2c[0][0]             
__________________________________________________________________________________________________
add_4 (Add)                     (None, 15, 15, 256)  0           bn2c_branch2c[0][0]              
                                                                 activation_10[0][0]              
__________________________________________________________________________________________________
activation_13 (Activation)      (None, 15, 15, 256)  0           add_4[0][0]                      
__________________________________________________________________________________________________
res3a_branch2a (Conv2D)         (None, 8, 8, 128)    32896       activation_13[0][0]              
__________________________________________________________________________________________________
bn3a_branch2a (BatchNormalizati (None, 8, 8, 128)    512         res3a_branch2a[0][0]             
__________________________________________________________________________________________________
activation_14 (Activation)      (None, 8, 8, 128)    0           bn3a_branch2a[0][0]              
__________________________________________________________________________________________________
res3a_branch2b (Conv2D)         (None, 8, 8, 128)    147584      activation_14[0][0]              
__________________________________________________________________________________________________
bn3a_branch2b (BatchNormalizati (None, 8, 8, 128)    512         res3a_branch2b[0][0]             
__________________________________________________________________________________________________
activation_15 (Activation)      (None, 8, 8, 128)    0           bn3a_branch2b[0][0]              
__________________________________________________________________________________________________
res3a_branch2c (Conv2D)         (None, 8, 8, 512)    66048       activation_15[0][0]              
__________________________________________________________________________________________________
res3a_branch1 (Conv2D)          (None, 8, 8, 512)    131584      activation_13[0][0]              
__________________________________________________________________________________________________
bn3a_branch2c (BatchNormalizati (None, 8, 8, 512)    2048        res3a_branch2c[0][0]             
__________________________________________________________________________________________________
bn3a_branch1 (BatchNormalizatio (None, 8, 8, 512)    2048        res3a_branch1[0][0]              
__________________________________________________________________________________________________
add_5 (Add)                     (None, 8, 8, 512)    0           bn3a_branch2c[0][0]              
                                                                 bn3a_branch1[0][0]               
__________________________________________________________________________________________________
activation_16 (Activation)      (None, 8, 8, 512)    0           add_5[0][0]                      
__________________________________________________________________________________________________
res3b_branch2a (Conv2D)         (None, 8, 8, 128)    65664       activation_16[0][0]              
__________________________________________________________________________________________________
bn3b_branch2a (BatchNormalizati (None, 8, 8, 128)    512         res3b_branch2a[0][0]             
__________________________________________________________________________________________________
activation_17 (Activation)      (None, 8, 8, 128)    0           bn3b_branch2a[0][0]              
__________________________________________________________________________________________________
res3b_branch2b (Conv2D)         (None, 8, 8, 128)    147584      activation_17[0][0]              
__________________________________________________________________________________________________
bn3b_branch2b (BatchNormalizati (None, 8, 8, 128)    512         res3b_branch2b[0][0]             
__________________________________________________________________________________________________
activation_18 (Activation)      (None, 8, 8, 128)    0           bn3b_branch2b[0][0]              
__________________________________________________________________________________________________
res3b_branch2c (Conv2D)         (None, 8, 8, 512)    66048       activation_18[0][0]              
__________________________________________________________________________________________________
bn3b_branch2c (BatchNormalizati (None, 8, 8, 512)    2048        res3b_branch2c[0][0]             
__________________________________________________________________________________________________
add_6 (Add)                     (None, 8, 8, 512)    0           bn3b_branch2c[0][0]              
                                                                 activation_16[0][0]              
__________________________________________________________________________________________________
activation_19 (Activation)      (None, 8, 8, 512)    0           add_6[0][0]                      
__________________________________________________________________________________________________
res3c_branch2a (Conv2D)         (None, 8, 8, 128)    65664       activation_19[0][0]              
__________________________________________________________________________________________________
bn3c_branch2a (BatchNormalizati (None, 8, 8, 128)    512         res3c_branch2a[0][0]             
__________________________________________________________________________________________________
activation_20 (Activation)      (None, 8, 8, 128)    0           bn3c_branch2a[0][0]              
__________________________________________________________________________________________________
res3c_branch2b (Conv2D)         (None, 8, 8, 128)    147584      activation_20[0][0]              
__________________________________________________________________________________________________
bn3c_branch2b (BatchNormalizati (None, 8, 8, 128)    512         res3c_branch2b[0][0]             
__________________________________________________________________________________________________
activation_21 (Activation)      (None, 8, 8, 128)    0           bn3c_branch2b[0][0]              
__________________________________________________________________________________________________
res3c_branch2c (Conv2D)         (None, 8, 8, 512)    66048       activation_21[0][0]              
__________________________________________________________________________________________________
bn3c_branch2c (BatchNormalizati (None, 8, 8, 512)    2048        res3c_branch2c[0][0]             
__________________________________________________________________________________________________
add_7 (Add)                     (None, 8, 8, 512)    0           bn3c_branch2c[0][0]              
                                                                 activation_19[0][0]              
__________________________________________________________________________________________________
activation_22 (Activation)      (None, 8, 8, 512)    0           add_7[0][0]                      
__________________________________________________________________________________________________
res3d_branch2a (Conv2D)         (None, 8, 8, 128)    65664       activation_22[0][0]              
__________________________________________________________________________________________________
bn3d_branch2a (BatchNormalizati (None, 8, 8, 128)    512         res3d_branch2a[0][0]             
__________________________________________________________________________________________________
activation_23 (Activation)      (None, 8, 8, 128)    0           bn3d_branch2a[0][0]              
__________________________________________________________________________________________________
res3d_branch2b (Conv2D)         (None, 8, 8, 128)    147584      activation_23[0][0]              
__________________________________________________________________________________________________
bn3d_branch2b (BatchNormalizati (None, 8, 8, 128)    512         res3d_branch2b[0][0]             
__________________________________________________________________________________________________
activation_24 (Activation)      (None, 8, 8, 128)    0           bn3d_branch2b[0][0]              
__________________________________________________________________________________________________
res3d_branch2c (Conv2D)         (None, 8, 8, 512)    66048       activation_24[0][0]              
__________________________________________________________________________________________________
bn3d_branch2c (BatchNormalizati (None, 8, 8, 512)    2048        res3d_branch2c[0][0]             
__________________________________________________________________________________________________
add_8 (Add)                     (None, 8, 8, 512)    0           bn3d_branch2c[0][0]              
                                                                 activation_22[0][0]              
__________________________________________________________________________________________________
activation_25 (Activation)      (None, 8, 8, 512)    0           add_8[0][0]                      
__________________________________________________________________________________________________
res4a_branch2a (Conv2D)         (None, 4, 4, 256)    131328      activation_25[0][0]              
__________________________________________________________________________________________________
bn4a_branch2a (BatchNormalizati (None, 4, 4, 256)    1024        res4a_branch2a[0][0]             
__________________________________________________________________________________________________
activation_26 (Activation)      (None, 4, 4, 256)    0           bn4a_branch2a[0][0]              
__________________________________________________________________________________________________
res4a_branch2b (Conv2D)         (None, 4, 4, 256)    590080      activation_26[0][0]              
__________________________________________________________________________________________________
bn4a_branch2b (BatchNormalizati (None, 4, 4, 256)    1024        res4a_branch2b[0][0]             
__________________________________________________________________________________________________
activation_27 (Activation)      (None, 4, 4, 256)    0           bn4a_branch2b[0][0]              
__________________________________________________________________________________________________
res4a_branch2c (Conv2D)         (None, 4, 4, 1024)   263168      activation_27[0][0]              
__________________________________________________________________________________________________
res4a_branch1 (Conv2D)          (None, 4, 4, 1024)   525312      activation_25[0][0]              
__________________________________________________________________________________________________
bn4a_branch2c (BatchNormalizati (None, 4, 4, 1024)   4096        res4a_branch2c[0][0]             
__________________________________________________________________________________________________
bn4a_branch1 (BatchNormalizatio (None, 4, 4, 1024)   4096        res4a_branch1[0][0]              
__________________________________________________________________________________________________
add_9 (Add)                     (None, 4, 4, 1024)   0           bn4a_branch2c[0][0]              
                                                                 bn4a_branch1[0][0]               
__________________________________________________________________________________________________
activation_28 (Activation)      (None, 4, 4, 1024)   0           add_9[0][0]                      
__________________________________________________________________________________________________
res4b_branch2a (Conv2D)         (None, 4, 4, 256)    262400      activation_28[0][0]              
__________________________________________________________________________________________________
bn4b_branch2a (BatchNormalizati (None, 4, 4, 256)    1024        res4b_branch2a[0][0]             
__________________________________________________________________________________________________
activation_29 (Activation)      (None, 4, 4, 256)    0           bn4b_branch2a[0][0]              
__________________________________________________________________________________________________
res4b_branch2b (Conv2D)         (None, 4, 4, 256)    590080      activation_29[0][0]              
__________________________________________________________________________________________________
bn4b_branch2b (BatchNormalizati (None, 4, 4, 256)    1024        res4b_branch2b[0][0]             
__________________________________________________________________________________________________
activation_30 (Activation)      (None, 4, 4, 256)    0           bn4b_branch2b[0][0]              
__________________________________________________________________________________________________
res4b_branch2c (Conv2D)         (None, 4, 4, 1024)   263168      activation_30[0][0]              
__________________________________________________________________________________________________
bn4b_branch2c (BatchNormalizati (None, 4, 4, 1024)   4096        res4b_branch2c[0][0]             
__________________________________________________________________________________________________
add_10 (Add)                    (None, 4, 4, 1024)   0           bn4b_branch2c[0][0]              
                                                                 activation_28[0][0]              
__________________________________________________________________________________________________
activation_31 (Activation)      (None, 4, 4, 1024)   0           add_10[0][0]                     
__________________________________________________________________________________________________
res4c_branch2a (Conv2D)         (None, 4, 4, 256)    262400      activation_31[0][0]              
__________________________________________________________________________________________________
bn4c_branch2a (BatchNormalizati (None, 4, 4, 256)    1024        res4c_branch2a[0][0]             
__________________________________________________________________________________________________
activation_32 (Activation)      (None, 4, 4, 256)    0           bn4c_branch2a[0][0]              
__________________________________________________________________________________________________
res4c_branch2b (Conv2D)         (None, 4, 4, 256)    590080      activation_32[0][0]              
__________________________________________________________________________________________________
bn4c_branch2b (BatchNormalizati (None, 4, 4, 256)    1024        res4c_branch2b[0][0]             
__________________________________________________________________________________________________
activation_33 (Activation)      (None, 4, 4, 256)    0           bn4c_branch2b[0][0]              
__________________________________________________________________________________________________
res4c_branch2c (Conv2D)         (None, 4, 4, 1024)   263168      activation_33[0][0]              
__________________________________________________________________________________________________
bn4c_branch2c (BatchNormalizati (None, 4, 4, 1024)   4096        res4c_branch2c[0][0]             
__________________________________________________________________________________________________
add_11 (Add)                    (None, 4, 4, 1024)   0           bn4c_branch2c[0][0]              
                                                                 activation_31[0][0]              
__________________________________________________________________________________________________
activation_34 (Activation)      (None, 4, 4, 1024)   0           add_11[0][0]                     
__________________________________________________________________________________________________
res4d_branch2a (Conv2D)         (None, 4, 4, 256)    262400      activation_34[0][0]              
__________________________________________________________________________________________________
bn4d_branch2a (BatchNormalizati (None, 4, 4, 256)    1024        res4d_branch2a[0][0]             
__________________________________________________________________________________________________
activation_35 (Activation)      (None, 4, 4, 256)    0           bn4d_branch2a[0][0]              
__________________________________________________________________________________________________
res4d_branch2b (Conv2D)         (None, 4, 4, 256)    590080      activation_35[0][0]              
__________________________________________________________________________________________________
bn4d_branch2b (BatchNormalizati (None, 4, 4, 256)    1024        res4d_branch2b[0][0]             
__________________________________________________________________________________________________
activation_36 (Activation)      (None, 4, 4, 256)    0           bn4d_branch2b[0][0]              
__________________________________________________________________________________________________
res4d_branch2c (Conv2D)         (None, 4, 4, 1024)   263168      activation_36[0][0]              
__________________________________________________________________________________________________
bn4d_branch2c (BatchNormalizati (None, 4, 4, 1024)   4096        res4d_branch2c[0][0]             
__________________________________________________________________________________________________
add_12 (Add)                    (None, 4, 4, 1024)   0           bn4d_branch2c[0][0]              
                                                                 activation_34[0][0]              
__________________________________________________________________________________________________
activation_37 (Activation)      (None, 4, 4, 1024)   0           add_12[0][0]                     
__________________________________________________________________________________________________
res4e_branch2a (Conv2D)         (None, 4, 4, 256)    262400      activation_37[0][0]              
__________________________________________________________________________________________________
bn4e_branch2a (BatchNormalizati (None, 4, 4, 256)    1024        res4e_branch2a[0][0]             
__________________________________________________________________________________________________
activation_38 (Activation)      (None, 4, 4, 256)    0           bn4e_branch2a[0][0]              
__________________________________________________________________________________________________
res4e_branch2b (Conv2D)         (None, 4, 4, 256)    590080      activation_38[0][0]              
__________________________________________________________________________________________________
bn4e_branch2b (BatchNormalizati (None, 4, 4, 256)    1024        res4e_branch2b[0][0]             
__________________________________________________________________________________________________
activation_39 (Activation)      (None, 4, 4, 256)    0           bn4e_branch2b[0][0]              
__________________________________________________________________________________________________
res4e_branch2c (Conv2D)         (None, 4, 4, 1024)   263168      activation_39[0][0]              
__________________________________________________________________________________________________
bn4e_branch2c (BatchNormalizati (None, 4, 4, 1024)   4096        res4e_branch2c[0][0]             
__________________________________________________________________________________________________
add_13 (Add)                    (None, 4, 4, 1024)   0           bn4e_branch2c[0][0]              
                                                                 activation_37[0][0]              
__________________________________________________________________________________________________
activation_40 (Activation)      (None, 4, 4, 1024)   0           add_13[0][0]                     
__________________________________________________________________________________________________
res4f_branch2a (Conv2D)         (None, 4, 4, 256)    262400      activation_40[0][0]              
__________________________________________________________________________________________________
bn4f_branch2a (BatchNormalizati (None, 4, 4, 256)    1024        res4f_branch2a[0][0]             
__________________________________________________________________________________________________
activation_41 (Activation)      (None, 4, 4, 256)    0           bn4f_branch2a[0][0]              
__________________________________________________________________________________________________
res4f_branch2b (Conv2D)         (None, 4, 4, 256)    590080      activation_41[0][0]              
__________________________________________________________________________________________________
bn4f_branch2b (BatchNormalizati (None, 4, 4, 256)    1024        res4f_branch2b[0][0]             
__________________________________________________________________________________________________
activation_42 (Activation)      (None, 4, 4, 256)    0           bn4f_branch2b[0][0]              
__________________________________________________________________________________________________
res4f_branch2c (Conv2D)         (None, 4, 4, 1024)   263168      activation_42[0][0]              
__________________________________________________________________________________________________
bn4f_branch2c (BatchNormalizati (None, 4, 4, 1024)   4096        res4f_branch2c[0][0]             
__________________________________________________________________________________________________
add_14 (Add)                    (None, 4, 4, 1024)   0           bn4f_branch2c[0][0]              
                                                                 activation_40[0][0]              
__________________________________________________________________________________________________
activation_43 (Activation)      (None, 4, 4, 1024)   0           add_14[0][0]                     
__________________________________________________________________________________________________
res5a_branch2a (Conv2D)         (None, 2, 2, 512)    524800      activation_43[0][0]              
__________________________________________________________________________________________________
bn5a_branch2a (BatchNormalizati (None, 2, 2, 512)    2048        res5a_branch2a[0][0]             
__________________________________________________________________________________________________
activation_44 (Activation)      (None, 2, 2, 512)    0           bn5a_branch2a[0][0]              
__________________________________________________________________________________________________
res5a_branch2b (Conv2D)         (None, 2, 2, 512)    2359808     activation_44[0][0]              
__________________________________________________________________________________________________
bn5a_branch2b (BatchNormalizati (None, 2, 2, 512)    2048        res5a_branch2b[0][0]             
__________________________________________________________________________________________________
activation_45 (Activation)      (None, 2, 2, 512)    0           bn5a_branch2b[0][0]              
__________________________________________________________________________________________________
res5a_branch2c (Conv2D)         (None, 2, 2, 2048)   1050624     activation_45[0][0]              
__________________________________________________________________________________________________
res5a_branch1 (Conv2D)          (None, 2, 2, 2048)   2099200     activation_43[0][0]              
__________________________________________________________________________________________________
bn5a_branch2c (BatchNormalizati (None, 2, 2, 2048)   8192        res5a_branch2c[0][0]             
__________________________________________________________________________________________________
bn5a_branch1 (BatchNormalizatio (None, 2, 2, 2048)   8192        res5a_branch1[0][0]              
__________________________________________________________________________________________________
add_15 (Add)                    (None, 2, 2, 2048)   0           bn5a_branch2c[0][0]              
                                                                 bn5a_branch1[0][0]               
__________________________________________________________________________________________________
activation_46 (Activation)      (None, 2, 2, 2048)   0           add_15[0][0]                     
__________________________________________________________________________________________________
res5b_branch2a (Conv2D)         (None, 2, 2, 512)    1049088     activation_46[0][0]              
__________________________________________________________________________________________________
bn5b_branch2a (BatchNormalizati (None, 2, 2, 512)    2048        res5b_branch2a[0][0]             
__________________________________________________________________________________________________
activation_47 (Activation)      (None, 2, 2, 512)    0           bn5b_branch2a[0][0]              
__________________________________________________________________________________________________
res5b_branch2b (Conv2D)         (None, 2, 2, 512)    2359808     activation_47[0][0]              
__________________________________________________________________________________________________
bn5b_branch2b (BatchNormalizati (None, 2, 2, 512)    2048        res5b_branch2b[0][0]             
__________________________________________________________________________________________________
activation_48 (Activation)      (None, 2, 2, 512)    0           bn5b_branch2b[0][0]              
__________________________________________________________________________________________________
res5b_branch2c (Conv2D)         (None, 2, 2, 2048)   1050624     activation_48[0][0]              
__________________________________________________________________________________________________
bn5b_branch2c (BatchNormalizati (None, 2, 2, 2048)   8192        res5b_branch2c[0][0]             
__________________________________________________________________________________________________
add_16 (Add)                    (None, 2, 2, 2048)   0           bn5b_branch2c[0][0]              
                                                                 activation_46[0][0]              
__________________________________________________________________________________________________
activation_49 (Activation)      (None, 2, 2, 2048)   0           add_16[0][0]                     
__________________________________________________________________________________________________
res5c_branch2a (Conv2D)         (None, 2, 2, 512)    1049088     activation_49[0][0]              
__________________________________________________________________________________________________
bn5c_branch2a (BatchNormalizati (None, 2, 2, 512)    2048        res5c_branch2a[0][0]             
__________________________________________________________________________________________________
activation_50 (Activation)      (None, 2, 2, 512)    0           bn5c_branch2a[0][0]              
__________________________________________________________________________________________________
res5c_branch2b (Conv2D)         (None, 2, 2, 512)    2359808     activation_50[0][0]              
__________________________________________________________________________________________________
bn5c_branch2b (BatchNormalizati (None, 2, 2, 512)    2048        res5c_branch2b[0][0]             
__________________________________________________________________________________________________
activation_51 (Activation)      (None, 2, 2, 512)    0           bn5c_branch2b[0][0]              
__________________________________________________________________________________________________
res5c_branch2c (Conv2D)         (None, 2, 2, 2048)   1050624     activation_51[0][0]              
__________________________________________________________________________________________________
bn5c_branch2c (BatchNormalizati (None, 2, 2, 2048)   8192        res5c_branch2c[0][0]             
__________________________________________________________________________________________________
add_17 (Add)                    (None, 2, 2, 2048)   0           bn5c_branch2c[0][0]              
                                                                 activation_49[0][0]              
__________________________________________________________________________________________________
activation_52 (Activation)      (None, 2, 2, 2048)   0           add_17[0][0]                     
__________________________________________________________________________________________________
average_pooling2d_1 (AveragePoo (None, 1, 1, 2048)   0           activation_52[0][0]              
__________________________________________________________________________________________________
flatten_1 (Flatten)             (None, 2048)         0           average_pooling2d_1[0][0]        
__________________________________________________________________________________________________
fc6 (Dense)                     (None, 6)            12294       flatten_1[0][0]                  
==================================================================================================
Total params: 23,600,006
Trainable params: 23,546,886
Non-trainable params: 53,120
__________________________________________________________________________________________________
# Save an architecture diagram to disk, then render it inline in the notebook
plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))

Result: omitted here.
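The parameter counts reported in the summary above can be verified by hand: a `Conv2D` layer with a `(kh, kw)` kernel, `c_in` input channels, `c_out` filters, and a bias has `(kh * kw * c_in + 1) * c_out` parameters, while a `BatchNormalization` layer carries `4 * c` parameters per channel (gamma, beta, moving mean, moving variance; the latter two are the non-trainable ones). A minimal sketch in plain Python (the layer names below are taken from the summary; the helper functions are ours, not Keras API):

```python
def conv2d_params(kh, kw, c_in, c_out):
    # Conv2D with bias: one (kh x kw x c_in) kernel plus one bias per filter
    return (kh * kw * c_in + 1) * c_out

def batchnorm_params(c):
    # BatchNormalization: gamma, beta, moving mean, moving variance per channel
    return 4 * c

# res5a_branch2a: 1x1 conv reducing 1024 -> 512 channels
print(conv2d_params(1, 1, 1024, 512))   # 524800, matches the summary
# res5a_branch2b: 3x3 conv, 512 -> 512 channels
print(conv2d_params(3, 3, 512, 512))    # 2359808
# bn5a_branch2c: BatchNormalization over 2048 channels
print(batchnorm_params(2048))           # 8192
```

The "Non-trainable params" total in the summary comes entirely from the batch-norm moving statistics (2 of the 4 parameters per channel).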
