Deep Learning, Week 4 -- Lesson 3: Object Detection Code

Disclaimer

This post follows 何寬's write-up.

Preface

This post implements car detection with the YOLO algorithm.
To collect data, a camera is mounted on the front hood of a car and photographs the road ahead every few seconds while driving. YOLO recognizes 80 classes, so the class label c can be represented either as an integer from 1 to 80 or as an 80-dimensional one-hot vector. We will use pre-trained weights.
**YOLO:** real-time and highly accurate. Prediction needs only a single forward pass; after non-max suppression, it outputs the recognized objects together with their bounding boxes.

Model details

  • The input is a batch of images of shape (m, 608, 608, 3).
  • The output is a list of recognized classes together with bounding boxes. Each bounding box is described by 6 numbers: (p_c, b_x, b_y, b_h, b_w, c). If c is expanded into an 80-dimensional vector, each bounding box is described by 85 numbers.

With 5 anchor boxes, the pipeline is: image (m, 608, 608, 3) → deep CNN → encoding (m, 19, 19, 5, 85).
If the center (midpoint) of an object falls inside a grid cell, that cell is responsible for detecting it. With 5 anchor boxes over a 19×19 grid, each cell carries the encodings of 5 anchor boxes, each made up of p_c, b_x, b_y, b_h, b_w plus the class scores. Flattening the last two dimensions turns the encoding from (m, 19, 19, 5, 85) into (m, 19, 19, 425).
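The flattening of the last two dimensions can be checked with a small NumPy sketch (the zero array is a hypothetical stand-in for the model encoding):

```python
import numpy as np

# Hypothetical batch of m = 2 encodings: 19x19 cells, 5 anchor boxes, 85 numbers each
encoding = np.zeros((2, 19, 19, 5, 85))
flat = encoding.reshape(2, 19, 19, 5 * 85)  # flatten the last two dimensions
print(flat.shape)  # (2, 19, 19, 425)
```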
For each anchor box of each cell, compute the element-wise product of p_c and the class probabilities, and extract the probability that the box contains each class.
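A tiny numeric sketch of this product for a single anchor box (the values and the 3-class vector are made up for illustration; the real model has 80 classes):

```python
import numpy as np

# One anchor box of one cell: p_c times the class probabilities
p_c = 0.9
class_probs = np.array([0.1, 0.7, 0.2])
scores = p_c * class_probs           # per-class scores
best_class = int(np.argmax(scores))  # index of the most likely class
best_score = float(np.max(scores))   # its score
```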
Steps:

  • filter by a class-score threshold
  • non-max suppression (IoU and NMS)

Filtering by class score threshold

Import the packages:

import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model

from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body

import yolo_utils

%matplotlib inline

To filter by a threshold, we discard anchor boxes whose class score falls below a preset value. The model outputs 19×19×5×85 numbers in total, 85 per anchor box (p_c, b_x, b_y, b_h, b_w plus the 80 class scores). Convert the tensor of shape (19, 19, 5, 85) (equivalently, (19, 19, 425)) into the following tensors:

  • box_confidence: tensor of shape (19, 19, 5, 1) holding p_c (the confidence that some object is present) for each of the 5 anchor boxes predicted by every cell.
  • boxes: tensor of shape (19, 19, 5, 4) holding (b_x, b_y, b_h, b_w) for every anchor box.
  • box_class_probs: tensor of shape (19, 19, 5, 80) holding the detection probabilities (c_1, c_2, ..., c_80) for every anchor box of every cell.
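A minimal NumPy sketch of this split, assuming the layout (p_c, b_x, b_y, b_h, b_w, c_1, ..., c_80) described above (the random array stands in for the real encoding, which in the notebook comes from yolo_head):

```python
import numpy as np

# Stand-in for one (19, 19, 5, 85) encoding, split along the last axis
encoding = np.random.rand(19, 19, 5, 85)

box_confidence  = encoding[..., 0:1]  # p_c                -> (19, 19, 5, 1)
boxes           = encoding[..., 1:5]  # b_x, b_y, b_h, b_w -> (19, 19, 5, 4)
box_class_probs = encoding[..., 5:]   # c_1 ... c_80       -> (19, 19, 5, 80)
```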

Steps:
1. Compute the box scores.
2. For each anchor box, find:
  2.1. the index of the class with the highest predicted probability;
  2.2. the corresponding maximum score.
3. Create a mask from the threshold.
4. Use TensorFlow to apply the mask to box_class_scores, boxes and box_classes, keeping only the boxes we want.

def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold=0.6):
    """
    Filter boxes by thresholding on object and class confidence.

    Arguments:
        box_confidence -- tensor of shape (19, 19, 5, 1), p_c (the confidence that
                          some object is present) for each of the 5 anchor boxes
                          predicted by each of the 19x19 cells.
        boxes -- tensor of shape (19, 19, 5, 4), (b_x, b_y, b_h, b_w) for every box.
        box_class_probs -- tensor of shape (19, 19, 5, 80), detection probabilities
                           (c_1, c_2, ..., c_80) for every box.
        threshold -- real number; a class prediction is kept only if its score
                     is at least this value.

    Returns:
        scores -- tensor of shape (None,), class scores of the kept boxes.
        boxes -- tensor of shape (None, 4), (b_x, b_y, b_h, b_w) of the kept boxes.
        classes -- tensor of shape (None,), class indices of the kept boxes.

    Note: "None" because the number of kept boxes is unknown in advance; it depends
          on the threshold. For example, if 10 boxes survive, scores will actually
          have shape (10,).
    """
    # Step 1: compute the box scores (element-wise product)
    box_scores = box_confidence * box_class_probs

    # Step 2: best class and its score for every box
    box_classes = K.argmax(box_scores, axis=-1)
    box_class_scores = K.max(box_scores, axis=-1)

    # Step 3: mask of boxes whose best score clears the threshold
    filtering_mask = (box_class_scores >= threshold)

    # Step 4: apply the mask
    scores = tf.boolean_mask(box_class_scores, filtering_mask)
    boxes = tf.boolean_mask(boxes, filtering_mask)
    classes = tf.boolean_mask(box_classes, filtering_mask)

    return scores, boxes, classes

Test:

with tf.Session() as test_a:
    box_confidence = tf.random_normal([19,19,5,1], mean=1, stddev=4, seed=1)
    boxes = tf.random_normal([19,19,5,4], mean=1, stddev=4, seed=1)
    box_class_probs = tf.random_normal([19,19,5,80], mean=1, stddev=4, seed=1)
    scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold=0.5)
    print(scores[2].eval(), boxes[2].eval(), classes[2].eval(), scores.shape, boxes.shape, classes.shape)

Result:

10.750582 [ 8.426533   3.2713668 -0.5313436 -4.9413733] 7 (?,) (?, 4) (?,)

Non-max suppression

Even with the score threshold filtering out low-scoring classes, many overlapping boxes remain. The second filter, non-max suppression, collapses them so that each object is detected only once.

Intersection over Union (IoU)

Non-max suppression relies on an important function called intersection over union (IoU). To implement it:

  • Define a box by its upper-left and lower-right corners (x_1, y_1, x_2, y_2) rather than by midpoint plus width/height.
  • The area of a box is (y_2 - y_1) × (x_2 - x_1).
  • Find the corners of the intersection of the two boxes, (x_1^i, y_1^i, x_2^i, y_2^i):
    - x_1^i = the maximum of the two boxes' x_1 coordinates
    - y_1^i = the maximum of the two boxes' y_1 coordinates
    - x_2^i = the minimum of the two boxes' x_2 coordinates
    - y_2^i = the minimum of the two boxes' y_2 coordinates
  • When computing the intersection area, make sure its width and height are positive; otherwise the area is 0.

def iou(box1, box2):
    """
    Compute the intersection over union of two boxes.

    Arguments:
        box1 -- first box, tuple (x1, y1, x2, y2)
        box2 -- second box, tuple (x1, y1, x2, y2)
    Returns:
        iou -- real number, the intersection over union
    """
    # Corners of the intersection rectangle
    xi1 = np.maximum(box1[0], box2[0])
    yi1 = np.maximum(box1[1], box2[1])
    xi2 = np.minimum(box1[2], box2[2])
    yi2 = np.minimum(box1[3], box2[3])
    # Width and height must be non-negative; non-overlapping boxes give area 0
    inter_area = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)

    # Union = area1 + area2 - intersection
    box1_area = (box1[2] - box1[0]) * (box1[3] - box1[1])
    box2_area = (box2[2] - box2[0]) * (box2[3] - box2[1])
    union_area = box1_area + box2_area - inter_area

    iou = inter_area / union_area

    return iou

Test:

box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print(iou(box1, box2))

Result:

0.14285714285714285
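A quick sanity check of this value: the two boxes intersect in the unit square, each has area 4, and the union subtracts the overlap once:

```latex
\text{intersection} = [2,3]\times[2,3] \;\Rightarrow\; A_i = 1,\qquad
A_1 = A_2 = 2 \times 2 = 4,\qquad
\mathrm{IoU} = \frac{1}{4 + 4 - 1} = \frac{1}{7} \approx 0.1429
```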

Non-max suppression

Steps:

  • Select the box with the highest score.
  • Compute its overlap with the remaining boxes and remove any box whose overlap exceeds iou_threshold.
  • Go back to the first step and iterate until no remaining box has a lower score than the currently selected one.

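The loop above can be sketched in plain NumPy (illustration only; `iou_np` and `nms_np` are hypothetical names, and the implementation below delegates the real work to tf.image.non_max_suppression):

```python
import numpy as np

def iou_np(box1, box2):
    # Boxes in corner format (x1, y1, x2, y2)
    xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

def nms_np(scores, boxes, max_boxes=10, iou_threshold=0.5):
    order = list(np.argsort(scores)[::-1])  # indices, highest score first
    keep = []
    while order and len(keep) < max_boxes:
        best = order.pop(0)                 # pick the highest-scoring box
        keep.append(int(best))
        # drop every remaining box that overlaps the picked one too much
        order = [j for j in order if iou_np(boxes[best], boxes[j]) < iou_threshold]
    return keep
```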
def yolo_non_max_suppression(scores, boxes, classes, max_boxes=10, iou_threshold=0.5):
    """
    Apply non-max suppression (NMS) to the boxes.

    Arguments:
        scores -- tensor of shape (None,), output of yolo_filter_boxes()
        boxes -- tensor of shape (None, 4), output of yolo_filter_boxes(),
                 already scaled to the image size (see below)
        classes -- tensor of shape (None,), output of yolo_filter_boxes()
        max_boxes -- integer, maximum number of predicted boxes
        iou_threshold -- real number, IoU threshold for suppression
    Returns:
        scores -- tensor of shape (None,), predicted score of each kept box
        boxes -- tensor of shape (None, 4), coordinates of each kept box
        classes -- tensor of shape (None,), predicted class of each kept box

    Note: "None" will be clearly smaller than max_boxes. This function also
          changes the shapes of scores, boxes and classes, which is convenient
          for the next step.
    """
    # tf.image.non_max_suppression() needs max_boxes as an initialized tensor
    max_boxes_tensor = K.variable(max_boxes, dtype="int32")
    K.get_session().run(tf.variables_initializer([max_boxes_tensor]))

    # Indices of the boxes kept by NMS
    nms_indices = tf.image.non_max_suppression(boxes, scores, max_boxes_tensor, iou_threshold)

    # Keep only those boxes
    scores = K.gather(scores, nms_indices)
    boxes = K.gather(boxes, nms_indices)
    classes = K.gather(classes, nms_indices)

    return scores, boxes, classes

Test:

with tf.Session() as test_b:
    scores = tf.random_normal([54,], mean=1, stddev=4, seed=1)
    boxes = tf.random_normal([54,4], mean=1, stddev=4, seed=1)
    classes = tf.random_normal([54,], mean=1, stddev=4, seed=1)
    scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes=10, iou_threshold=0.5)
    print(scores[2].eval(), boxes[2].eval(), classes[2].eval(), scores.eval().shape, boxes.eval().shape, classes.eval().shape)

Result:

6.938395 [-5.299932    3.1379814   4.450367    0.95942086] -2.2452729 (10,) (10, 4) (10,)

Filtering all the boxes

Now we implement a function that takes the CNN output and filters all the boxes using the functions written above. The function, yolo_eval(), takes the YOLO-encoded output and filters the boxes using score thresholding and non-max suppression. Note that there are several ways of representing boxes:

  • boxes=yolo_boxes_to_corners(box_xy,box_wh)

Converts the YOLO box coordinates (x, y, w, h) into corner coordinates (x1, y1, x2, y2) to match the input expected by yolo_filter_boxes().

  • boxes=yolo_utils.scale_boxes(boxes,image_shape)
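Presumably, this rescales the boxes from the 608×608 model input back to the original image size. A simplified NumPy sketch of the arithmetic behind both helpers (the names and the (x1, y1, x2, y2) ordering here are illustrative assumptions; the real tensor helpers in yad2k/yolo_utils handle the exact coordinate ordering themselves):

```python
import numpy as np

# Mid-point (x, y) plus (w, h) -> corner coordinates, in units of the model input
box_xy = np.array([0.5, 0.5])
box_wh = np.array([0.2, 0.4])
corners = np.concatenate([box_xy - box_wh / 2.0, box_xy + box_wh / 2.0])

# Rescaling multiplies each normalized corner by the image dimensions
image_wh = np.array([1280.0, 720.0])     # (width, height) of the original image
scaled = corners * np.tile(image_wh, 2)  # (x1, y1, x2, y2) in image pixels
```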

Steps:

  • The input image has shape (608, 608, 3).
  • The image passes through a CNN model that returns a (19, 19, 5, 85) output.
  • After flattening the last two dimensions, the output has shape (19, 19, 425):
    • each cell of the 19×19 grid holds 425 numbers;
    • 425 = 5 × 85: each cell holds 5 anchor boxes, each described by 5 basic numbers plus 80 class predictions;
    • 85 = 5 + 80: the 5 basic numbers are (p_c, b_x, b_y, b_h, b_w) and the remaining 80 are the class predictions.
  • Boxes are then selected as follows:
    • score thresholding: discard boxes whose class score is below the threshold;
    • non-max suppression: compute the IoU and avoid selecting overlapping boxes.
  • This yields YOLO's final output.

def yolo_eval(yolo_outputs, image_shape=(720., 1280.), max_boxes=10, score_threshold=0.6, iou_threshold=0.5):
    """
    Convert the YOLO-encoded output (many boxes) into the predicted boxes
    together with their scores, coordinates and classes.

    Arguments:
        yolo_outputs -- output of the encoding model (for an image of shape
                        (608, 608, 3)), a tuple of 4 tensors:
                        box_confidence : tensor of shape (None, 19, 19, 5, 1)
                        box_xy         : tensor of shape (None, 19, 19, 5, 2)
                        box_wh         : tensor of shape (None, 19, 19, 5, 2)
                        box_class_probs: tensor of shape (None, 19, 19, 5, 80)
        image_shape -- tensor of shape (2,), the input image size, here (720., 1280.)
        max_boxes -- integer, maximum number of predicted boxes
        score_threshold -- real number, score threshold
        iou_threshold -- real number, IoU threshold

    Returns:
        scores -- tensor of shape (None,), predicted score of each box
        boxes -- tensor of shape (None, 4), predicted box coordinates
        classes -- tensor of shape (None,), predicted class of each box
    """
    # Unpack the model output
    box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs

    # Convert boxes to corner coordinates
    boxes = yolo_boxes_to_corners(box_xy, box_wh)

    # Score thresholding
    scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, score_threshold)

    # Scale the boxes back to the original image size
    boxes = yolo_utils.scale_boxes(boxes, image_shape)

    # Non-max suppression
    scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes, iou_threshold)

    return scores, boxes, classes

Test:

with tf.Session() as test_c:
    yolo_outputs = (tf.random_normal([19,19,5,1], mean=1, stddev=4, seed=1),
                    tf.random_normal([19,19,5,2], mean=1, stddev=4, seed=1),
                    tf.random_normal([19,19,5,2], mean=1, stddev=4, seed=1),
                    tf.random_normal([19,19,5,80], mean=1, stddev=4, seed=1),
                   )
    scores, boxes, classes = yolo_eval(yolo_outputs)
    print(scores[2].eval(), boxes[2].eval(), classes[2].eval(), scores.eval().shape, boxes.eval().shape, classes.eval().shape)

Result:

138.79124 [1292.3297  -278.52167 3876.9893  -835.56494] 54 (10,) (10, 4) (10,)

Testing a pre-trained YOLO model

We will use a pre-trained model and test it on the car-detection dataset. First, create a session to run the computation graph:

sess = K.get_session()

Defining the classes, anchors and image shape

The files "coco_classes.txt" and "yolo_anchors.txt" collect the information about the 80 classes and the 5 anchor boxes; load this data into the model:

class_names = yolo_utils.read_classes("model_data/coco_classes.txt")
anchors = yolo_utils.read_anchors("model_data/yolo_anchors.txt")
image_shape = (720.,1280.)
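yolo_utils is the course's helper module; a minimal sketch of what read_classes and read_anchors presumably do (an assumption about the helpers, not their actual source):

```python
import numpy as np

def read_classes(classes_path):
    # One class name per line
    with open(classes_path) as f:
        return [line.strip() for line in f]

def read_anchors(anchors_path):
    # A single line of comma-separated floats, paired into (width, height)
    with open(anchors_path) as f:
        values = [float(x) for x in f.readline().split(",")]
    return np.array(values).reshape(-1, 2)
```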

Loading the pre-trained model

Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes over a wide range of object classes, so we load an existing pre-trained Keras YOLO model stored in "yolov2.h5", i.e. the trained weights of the YOLO model.

yolo_model = load_model("model_data/yolov2.h5")

Here is a summary of the layers the model contains:

yolo_model.summary()

Result:

Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 608, 608, 3)  0                                            
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 608, 608, 32) 864         input_1[0][0]                    
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 608, 608, 32) 128         conv2d_1[0][0]                   
__________________________________________________________________________________________________
leaky_re_lu_1 (LeakyReLU)       (None, 608, 608, 32) 0           batch_normalization_1[0][0]      
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 304, 304, 32) 0           leaky_re_lu_1[0][0]              
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 304, 304, 64) 18432       max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 304, 304, 64) 256         conv2d_2[0][0]                   
__________________________________________________________________________________________________
leaky_re_lu_2 (LeakyReLU)       (None, 304, 304, 64) 0           batch_normalization_2[0][0]      
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D)  (None, 152, 152, 64) 0           leaky_re_lu_2[0][0]              
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 152, 152, 128 73728       max_pooling2d_2[0][0]            
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 152, 152, 128 512         conv2d_3[0][0]                   
__________________________________________________________________________________________________
leaky_re_lu_3 (LeakyReLU)       (None, 152, 152, 128 0           batch_normalization_3[0][0]      
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 152, 152, 64) 8192        leaky_re_lu_3[0][0]              
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 152, 152, 64) 256         conv2d_4[0][0]                   
__________________________________________________________________________________________________
leaky_re_lu_4 (LeakyReLU)       (None, 152, 152, 64) 0           batch_normalization_4[0][0]      
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 152, 152, 128 73728       leaky_re_lu_4[0][0]              
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 152, 152, 128 512         conv2d_5[0][0]                   
__________________________________________________________________________________________________
leaky_re_lu_5 (LeakyReLU)       (None, 152, 152, 128 0           batch_normalization_5[0][0]      
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D)  (None, 76, 76, 128)  0           leaky_re_lu_5[0][0]              
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (None, 76, 76, 256)  294912      max_pooling2d_3[0][0]            
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 76, 76, 256)  1024        conv2d_6[0][0]                   
__________________________________________________________________________________________________
leaky_re_lu_6 (LeakyReLU)       (None, 76, 76, 256)  0           batch_normalization_6[0][0]      
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, 76, 76, 128)  32768       leaky_re_lu_6[0][0]              
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 76, 76, 128)  512         conv2d_7[0][0]                   
__________________________________________________________________________________________________
leaky_re_lu_7 (LeakyReLU)       (None, 76, 76, 128)  0           batch_normalization_7[0][0]      
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, 76, 76, 256)  294912      leaky_re_lu_7[0][0]              
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 76, 76, 256)  1024        conv2d_8[0][0]                   
__________________________________________________________________________________________________
leaky_re_lu_8 (LeakyReLU)       (None, 76, 76, 256)  0           batch_normalization_8[0][0]      
__________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D)  (None, 38, 38, 256)  0           leaky_re_lu_8[0][0]              
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 38, 38, 512)  1179648     max_pooling2d_4[0][0]            
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 38, 38, 512)  2048        conv2d_9[0][0]                   
__________________________________________________________________________________________________
leaky_re_lu_9 (LeakyReLU)       (None, 38, 38, 512)  0           batch_normalization_9[0][0]      
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, 38, 38, 256)  131072      leaky_re_lu_9[0][0]              
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 38, 38, 256)  1024        conv2d_10[0][0]                  
__________________________________________________________________________________________________
leaky_re_lu_10 (LeakyReLU)      (None, 38, 38, 256)  0           batch_normalization_10[0][0]     
__________________________________________________________________________________________________
conv2d_11 (Conv2D)              (None, 38, 38, 512)  1179648     leaky_re_lu_10[0][0]             
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 38, 38, 512)  2048        conv2d_11[0][0]                  
__________________________________________________________________________________________________
leaky_re_lu_11 (LeakyReLU)      (None, 38, 38, 512)  0           batch_normalization_11[0][0]     
__________________________________________________________________________________________________
conv2d_12 (Conv2D)              (None, 38, 38, 256)  131072      leaky_re_lu_11[0][0]             
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 38, 38, 256)  1024        conv2d_12[0][0]                  
__________________________________________________________________________________________________
leaky_re_lu_12 (LeakyReLU)      (None, 38, 38, 256)  0           batch_normalization_12[0][0]     
__________________________________________________________________________________________________
conv2d_13 (Conv2D)              (None, 38, 38, 512)  1179648     leaky_re_lu_12[0][0]             
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 38, 38, 512)  2048        conv2d_13[0][0]                  
__________________________________________________________________________________________________
leaky_re_lu_13 (LeakyReLU)      (None, 38, 38, 512)  0           batch_normalization_13[0][0]     
__________________________________________________________________________________________________
max_pooling2d_5 (MaxPooling2D)  (None, 19, 19, 512)  0           leaky_re_lu_13[0][0]             
__________________________________________________________________________________________________
conv2d_14 (Conv2D)              (None, 19, 19, 1024) 4718592     max_pooling2d_5[0][0]            
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 19, 19, 1024) 4096        conv2d_14[0][0]                  
__________________________________________________________________________________________________
leaky_re_lu_14 (LeakyReLU)      (None, 19, 19, 1024) 0           batch_normalization_14[0][0]     
__________________________________________________________________________________________________
conv2d_15 (Conv2D)              (None, 19, 19, 512)  524288      leaky_re_lu_14[0][0]             
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 19, 19, 512)  2048        conv2d_15[0][0]                  
__________________________________________________________________________________________________
leaky_re_lu_15 (LeakyReLU)      (None, 19, 19, 512)  0           batch_normalization_15[0][0]     
__________________________________________________________________________________________________
conv2d_16 (Conv2D)              (None, 19, 19, 1024) 4718592     leaky_re_lu_15[0][0]             
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 19, 19, 1024) 4096        conv2d_16[0][0]                  
__________________________________________________________________________________________________
leaky_re_lu_16 (LeakyReLU)      (None, 19, 19, 1024) 0           batch_normalization_16[0][0]     
__________________________________________________________________________________________________
conv2d_17 (Conv2D)              (None, 19, 19, 512)  524288      leaky_re_lu_16[0][0]             
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 19, 19, 512)  2048        conv2d_17[0][0]                  
__________________________________________________________________________________________________
leaky_re_lu_17 (LeakyReLU)      (None, 19, 19, 512)  0           batch_normalization_17[0][0]     
__________________________________________________________________________________________________
conv2d_18 (Conv2D)              (None, 19, 19, 1024) 4718592     leaky_re_lu_17[0][0]             
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 19, 19, 1024) 4096        conv2d_18[0][0]                  
__________________________________________________________________________________________________
leaky_re_lu_18 (LeakyReLU)      (None, 19, 19, 1024) 0           batch_normalization_18[0][0]     
__________________________________________________________________________________________________
conv2d_19 (Conv2D)              (None, 19, 19, 1024) 9437184     leaky_re_lu_18[0][0]             
__________________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, 19, 19, 1024) 4096        conv2d_19[0][0]                  
__________________________________________________________________________________________________
conv2d_21 (Conv2D)              (None, 38, 38, 64)   32768       leaky_re_lu_13[0][0]             
__________________________________________________________________________________________________
leaky_re_lu_19 (LeakyReLU)      (None, 19, 19, 1024) 0           batch_normalization_19[0][0]     
__________________________________________________________________________________________________
batch_normalization_21 (BatchNo (None, 38, 38, 64)   256         conv2d_21[0][0]                  
__________________________________________________________________________________________________
conv2d_20 (Conv2D)              (None, 19, 19, 1024) 9437184     leaky_re_lu_19[0][0]             
__________________________________________________________________________________________________
leaky_re_lu_21 (LeakyReLU)      (None, 38, 38, 64)   0           batch_normalization_21[0][0]     
__________________________________________________________________________________________________
batch_normalization_20 (BatchNo (None, 19, 19, 1024) 4096        conv2d_20[0][0]                  
__________________________________________________________________________________________________
space_to_depth_x2 (Lambda)      (None, 19, 19, 256)  0           leaky_re_lu_21[0][0]             
__________________________________________________________________________________________________
leaky_re_lu_20 (LeakyReLU)      (None, 19, 19, 1024) 0           batch_normalization_20[0][0]     
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 19, 19, 1280) 0           space_to_depth_x2[0][0]          
                                                                 leaky_re_lu_20[0][0]             
__________________________________________________________________________________________________
conv2d_22 (Conv2D)              (None, 19, 19, 1024) 11796480    concatenate_1[0][0]              
__________________________________________________________________________________________________
batch_normalization_22 (BatchNo (None, 19, 19, 1024) 4096        conv2d_22[0][0]                  
__________________________________________________________________________________________________
leaky_re_lu_22 (LeakyReLU)      (None, 19, 19, 1024) 0           batch_normalization_22[0][0]     
__________________________________________________________________________________________________
conv2d_23 (Conv2D)              (None, 19, 19, 425)  435625      leaky_re_lu_22[0][0]             
==================================================================================================
Total params: 50,983,561
Trainable params: 50,962,889
Non-trainable params: 20,672

Converting the model output to bounding boxes

yolo_outputs = yolo_head(yolo_model.output,anchors,len(class_names))

Filtering the boxes

scores,boxes,classes = yolo_eval(yolo_outputs,image_shape=(720.,1280.),max_boxes=10,score_threshold=0.6,iou_threshold=0.5)

Running the graph on real images

def predict(sess, image_file, is_show_info=True, is_plot=True):
    """
    Run the graph stored in sess to predict bounding boxes for image_file,
    then print and plot the predictions.

    Arguments:
        sess -- the TensorFlow/Keras session containing the YOLO graph
        image_file -- name of an image stored in the "images" folder
    Returns:
        out_scores -- tensor of shape (None,), predicted score of each box
        out_boxes -- tensor of shape (None, 4), box position information
        out_classes -- tensor of shape (None,), predicted class index of each box
    """

    # Preprocess the image
    image, image_data = yolo_utils.preprocess_image("images/" + image_file, model_image_size=(608, 608))

    # Run the session, selecting the right placeholders in feed_dict
    out_scores, out_boxes, out_classes = sess.run([scores, boxes, classes], feed_dict={yolo_model.input: image_data, K.learning_phase(): 0})

    # Print the prediction info
    if is_show_info:
        print("Found " + str(len(out_boxes)) + " boxes in " + str(image_file) + ".")

    # Choose colours for the bounding boxes
    colors = yolo_utils.generate_colors(class_names)

    # Draw the bounding boxes on the image
    yolo_utils.draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)

    # Save the image with the boxes drawn
    image.save(os.path.join("out", image_file), quality=100)

    # Show the image with the boxes drawn
    if is_plot:
        output_image = scipy.misc.imread(os.path.join("out", image_file))
        plt.imshow(output_image)

    return out_scores, out_boxes, out_classes

Test:

out_scores,out_boxes,out_classes = predict(sess,"test.jpg")

Result:

Found 7 boxes in test.jpg.
car 0.60 (925, 285) (1045, 374)
car 0.66 (706, 279) (786, 350)
bus 0.67 (5, 266) (220, 407)
car 0.70 (947, 324) (1280, 705)
car 0.74 (159, 303) (346, 440)
car 0.80 (761, 282) (942, 412)
car 0.89 (367, 300) (745, 648)


Plotting images in batch

Draw all the images from "0001.jpg" to "0120.jpg" in the images folder.

for i in range(1, 121):
    # Zero-pad the index to four digits, e.g. 1 -> "0001.jpg"
    filename = str(i).zfill(4) + ".jpg"
    print("Current file: " + str(filename))

    out_scores, out_boxes, out_classes = predict(sess, filename, is_show_info=False, is_plot=False)

print("Done.")

Result:

Current file: 0001.jpg
Current file: 0002.jpg
Current file: 0003.jpg
car 0.69 (347, 289) (445, 321)
car 0.70 (230, 307) (317, 354)
car 0.73 (671, 284) (770, 315)
Current file: 0004.jpg
car 0.63 (400, 285) (515, 327)
car 0.66 (95, 297) (227, 342)
car 0.68 (1, 321) (121, 410)
car 0.72 (539, 277) (658, 318)

······ (I'll skip the rest of the output~)

Current file: 0116.jpg
traffic light 0.63 (522, 76) (543, 113)
car 0.80 (5, 271) (241, 672)
Current file: 0117.jpg
Current file: 0118.jpg
Current file: 0119.jpg
traffic light 0.61 (1056, 0) (1138, 131)
Current file: 0120.jpg
Done.