Hands-On Case-Study Notes on TensorFlow Serving (tf=1.4)

I've recently been testing several general-purpose models and projects, including CLUE (tf + pytorch), bert4keras (keras), and Kashgari (keras + tf). When it comes to deployment, the choice usually comes down to TensorFlow Serving versus Flask.
Here is a very good, fairly comprehensive hands-on example based on TensorFlow 1.x.





Reference blog: Deploying Keras models using TensorFlow Serving and Flask
Chinese version: 使用 TensorFlow Serving 和 Flask 部署 Keras 模型
GitHub: keras-and-tensorflow-serving
Official tutorial: TensorFlow Serving

The tutorials cover the full details; below are a few of the key points.


1 Installing TensorFlow Serving

There are several ways to launch TF Serving: via Docker, or via the tensorflow_model_server binary installed directly; I find the latter less effort.

$ apt install curl

$ echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list && curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -

$ apt-get update

$ apt-get install tensorflow-model-server

$ tensorflow_model_server --version
TensorFlow ModelServer: 1.10.0-dev
TensorFlow Library: 1.11.0

$ python  --version
Python 3.6.6

Clone all the code from GitHub: keras-and-tensorflow-serving for later use.

The repository layout:

(tensorflow) ubuntu@Himanshu:~/Desktop/Medium/keras-and-tensorflow-serving$ tree -c
└── keras-and-tensorflow-serving
    ├── README.md
    ├── my_image_classifier
    │   └── 1
    │       ├── saved_model.pb
    │       └── variables
    │           ├── variables.data-00000-of-00001
    │           └── variables.index
    ├── test_images
    │   ├── car.jpg
    │   └── car.png
    ├── flask_server
    │   ├── app.py
    │   └── flask_sample_request.py
    └── scripts
        ├── download_inceptionv3_model.py
        ├── inception.h5
        ├── auto_cmd.py
        ├── export_saved_model.py
        ├── imagenet_class_index.json
        └── serving_sample_request.py
6 directories, 15 files

Another option is Docker-based deployment:

sudo nvidia-docker run -p 8500:8500 \
  -v /home/projects/resnet/weights/:/models \
  --name resnet50 \
  -itd --entrypoint=tensorflow_model_server tensorflow/serving:2.0.0-gpu \
  --port=8500 --per_process_gpu_memory_fraction=0.5 \
  --enable_batching=true --model_name=resnet --model_base_path=/models/season &

Reference: TensorFlow Serving + Docker + Tornado: production-grade rapid deployment of machine learning models
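To send requests to this dockerized server, note that only the gRPC port (8500) is mapped, so calls go over gRPC rather than REST. Below is a minimal Python client sketch, assuming the grpcio and tensorflow-serving-api packages are installed; the input name 'input_image' and the 224x224 shape are assumptions, so match them to your model's actual signature.

import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

# Open a channel to the gRPC port mapped by the container above
channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'resnet'                    # matches --model_name
request.model_spec.signature_name = 'serving_default'

# Dummy batch of one 224x224 RGB image; replace with real preprocessing
img = np.zeros((1, 224, 224, 3), dtype=np.float32)
request.inputs['input_image'].CopyFrom(tf.make_tensor_proto(img))

result = stub.Predict(request, 10.0)  # 10-second timeout
print(result.outputs)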

2 Converting Keras H5 models to TensorFlow pb + hot model updates

2.1 Converting Keras H5 to TensorFlow pb

See export_saved_model.py for details:

import tensorflow as tf

# The export path contains the name and the version of the model
tf.keras.backend.set_learning_phase(0)  # Ignore dropout at inference
model = tf.keras.models.load_model('./inception.h5')
export_path = '../my_image_classifier/1'

# Fetch the Keras session and save the model
# The signature definition is defined by the input and output tensors
# And stored with the default serving key
with tf.keras.backend.get_session() as sess:
    tf.saved_model.simple_save(
        sess,
        export_path,
        inputs={'input_image': model.input},
        outputs={t.name: t for t in model.outputs})

Pay special attention to {'input_image': model.input}: once TF Serving is up, the input key you send in requests must match this name.
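If you want to double-check which names were actually recorded in the export, you can reload the SavedModel and print its default serving signature. A small sanity-check sketch using TF 1.x APIs:

import tensorflow as tf

# Reload the exported SavedModel and print the serving signature,
# confirming that the input key really is 'input_image'
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], '../my_image_classifier/1')
    sig = meta_graph.signature_def['serving_default']
    print('inputs :', {k: v.name for k, v in sig.inputs.items()})
    print('outputs:', {k: v.name for k, v in sig.outputs.items()})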

If your TF version is 2.0 or above, you can simply pass save_format='tf' to model.save():

import tensorflow as tf

# `dependencies` maps the names of any custom layers/losses/metrics in the
# H5 file to their implementations; left empty here as a placeholder
dependencies = {}

# Load the H5 model file with tf.keras's load_model
model_path = 'v7_resnet50_19-0.9068-0.8000.h5'
model = tf.keras.models.load_model(model_path, custom_objects=dependencies)
model.save('models/resnet/', save_format='tf')  # export in TF SavedModel format

Note: the model must be loaded with tf.keras.models.load_model, not keras.models.load_model; only the former can export the model files that TF Serving needs.
Exporting a Keras model used to require a long chunk of SavedModelBuilder boilerplate, as in the article 《keras、tensorflow serving踩坑記》; now a simple model.save is enough.
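For contrast, the old builder-based export that model.save replaces looked roughly like this. A sketch using TF 1.x APIs; the 'scores' output key is illustrative:

import tensorflow as tf

model = tf.keras.models.load_model('./inception.h5')

# Define a prediction signature and write it out with SavedModelBuilder
builder = tf.saved_model.builder.SavedModelBuilder('models/resnet/1')
signature = tf.saved_model.signature_def_utils.predict_signature_def(
    inputs={'input_image': model.input},
    outputs={'scores': model.output})

with tf.keras.backend.get_session() as sess:
    builder.add_meta_graph_and_variables(
        sess,
        tags=[tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
    builder.save()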

2.2 Hot updates

TensorFlow Serving supports hot model updates. The typical model folder structure is:

/saved_model_files
    /1      # model files for version 1
        /assets
        /variables
        saved_model.pb
    ...
    /N      # model files for version N
        /assets
        /variables
        saved_model.pb

The subfolders 1 through N above hold different versions of the model.
When specifying --model_base_path, pass the absolute path (not a relative path) of the root directory.
For example, if the structure above lives under /home/snowkylin, then --model_base_path should be set to /home/snowkylin/saved_model_files (without the model version number).
TensorFlow Serving automatically picks the model with the largest version number to load.

We can do the following:

  • Run the same export script on the new Keras model.
  • In export_saved_model.py, change export_path = '../my_image_classifier/1' to export_path = '../my_image_classifier/2'.

TensorFlow Serving will automatically detect the new model version under the my_image_classifier directory and update it on the server.
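Putting the two steps together, a hypothetical helper can pick the next version number automatically (a sketch assuming the TF 2.x model.save path from section 2.1; export_next_version is not part of the original repo):

import os
import tensorflow as tf

def export_next_version(h5_path, base_dir='../my_image_classifier'):
    # Find the highest existing version folder and export under version + 1;
    # TF Serving will notice the new folder and hot-swap to it
    versions = [int(d) for d in os.listdir(base_dir) if d.isdigit()]
    next_version = max(versions, default=0) + 1
    model = tf.keras.models.load_model(h5_path)
    model.save(os.path.join(base_dir, str(next_version)), save_format='tf')
    return next_version

# export_next_version('./inception.h5')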

3 Launching tensorflow_model_server

tensorflow_model_server \
    --rest_api_port=PORT (e.g. 8501) \
    --model_name=MODEL_NAME \
    --model_base_path="absolute path to the SavedModel folder (without the version number)"

The example in the article is image classification:

tensorflow_model_server --model_base_path=/home/ubuntu/Desktop/Medium/keras-and-tensorflow-serving/my_image_classifier --rest_api_port=9000 --model_name=ImageClassifier
  • --rest_api_port: TensorFlow Serving will start a gRPC ModelServer on port 8500, and the REST API will be available on port 9000.
  • --model_name: the name of the server you will send POST requests to; you can pick any name.

If it starts successfully, you'll see log output like this:

2018-02-08 16:28:02.641662: I tensorflow_serving/model_servers/main.cc:149] Building single TensorFlow model file config:  model_name: voice model_base_path: /home/yu/workspace/test/test_model/
2018-02-08 16:28:02.641917: I tensorflow_serving/model_servers/server_core.cc:439] Adding/updating models.
2018-02-08 16:28:02.641976: I tensorflow_serving/model_servers/server_core.cc:490]  (Re-)adding model: voice
2018-02-08 16:28:02.742740: I tensorflow_serving/core/basic_manager.cc:705] Successfully reserved resources to load servable {name: voice version: 1}
2018-02-08 16:28:02.742800: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: voice version: 1}
2018-02-08 16:28:02.742815: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: voice version: 1}
2018-02-08 16:28:02.742867: I external/org_tensorflow/tensorflow/contrib/session_bundle/bundle_shim.cc:360] Attempting to load native SavedModelBundle in bundle-shim from: /home/yu/workspace/test/test_model/1
2018-02-08 16:28:02.742906: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:236] Loading SavedModel from: /home/yu/workspace/test/test_model/1
2018-02-08 16:28:02.755299: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-02-08 16:28:02.795329: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:155] Restoring SavedModel bundle.
2018-02-08 16:28:02.820146: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:190] Running LegacyInitOp on SavedModel bundle.
2018-02-08 16:28:02.832832: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:284] Loading SavedModel: success. Took 89481 microseconds.
2018-02-08 16:28:02.834804: I tensorflow_serving/core/loader_harness.cc:86] Successfully loaded servable version {name: voice version: 1}
2018-02-08 16:28:02.836855: I tensorflow_serving/model_servers/main.cc:290] Running ModelServer at 0.0.0.0:8500 ...
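You can also confirm the model is being served by querying the REST model-status endpoint. A quick sketch; the port and model name follow the launch command above:

import requests

# GET /v1/models/<model_name> reports the state of each loaded version;
# a healthy server answers with state "AVAILABLE"
r = requests.get('http://localhost:9000/v1/models/ImageClassifier')
print(r.json())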

4 Testing the TensorFlow Serving service

The script serving_sample_request.py sends a POST request to the TensorFlow Serving service.

Specifically:
Server URI: http://HOST:PORT/v1/models/MODEL_NAME:predict
Request body:

{
    "signature_name": "the signature to call (not needed for Sequential models)",
    "instances": input data
}

The response is:

{
    "predictions": returned values
}
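Besides the row-oriented "instances" format shown above, TF Serving's REST API also accepts a columnar "inputs" format. A small sketch (the 'input_image' key follows our export):

import numpy as np

img = np.zeros((224, 224, 3), dtype='float16')  # stand-in for a preprocessed image

# Row format: a list of examples, each a {input_name: value} dict
payload_row = {"instances": [{"input_image": img.tolist()}]}

# Columnar format: a single {input_name: batched values} dict
payload_col = {"inputs": {"input_image": [img.tolist()]}}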
The full serving_sample_request.py:

import json

import numpy as np
import requests
from keras.applications import inception_v3
from keras.preprocessing import image

# The image path is hardcoded here; flask_sample_request.py below shows an
# argparse-based variant that takes the path from the command line
image_path = 'test_images/car.png'

# Preprocessing our input image
img = image.img_to_array(image.load_img(image_path, target_size=(224, 224))) / 255.

# this line is added because of a bug in tf_serving(1.10.0-dev)
img = img.astype('float16')


payload = {
    "instances": [{'input_image': img.tolist()}]
}


# sending post request to TensorFlow Serving server
r = requests.post('http://localhost:9000/v1/models/ImageClassifier:predict', json=payload)
pred = json.loads(r.content.decode('utf-8'))

# Decoding the response
# decode_predictions(preds, top=5) by default gives top 5 results
# You can pass "top=10" to get the top 10 predictions
print(json.dumps(inception_v3.decode_predictions(np.array(pred['predictions']))[0]))

The output:

Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/imagenet_class_index.json
40960/35363 [==================================] - 1s 20us/step
[["n04285008", "sports_car", 0.998413682], ["n04037443", "racer", 0.00140099635], ["n03459775", "grille", 0.000160793832], ["n02974003", "car_wheel", 9.57861539e-06], ["n03100240", "convertible", 6.01583724e-06]]

5 Why a Flask server is needed

This is just a quick summary of the benefits of using TF Serving together with Flask.

As you can see, we performed some image preprocessing steps in serving_sample_request.py (the front-end caller). Here are the reasons for creating a Flask server on top of the TensorFlow Serving layer:

  • When we provide an API to the front-end team, we need to make sure they aren't overwhelmed by the technical details of preprocessing.
  • We may not always have a Python back-end server (e.g., a Node.js server), so doing the preprocessing with the numpy and keras libraries there could be a hassle.
  • If we plan to serve multiple models, we would have to create multiple TensorFlow Serving services and add new URLs to the front-end code. A Flask server keeps the domain URL the same, and we only need to add a new route (a function).
  • Subscription-based access, exception handling, and other tasks can be performed in the Flask app.

The Flask server needs only a single file, flask_server/app.py:
import base64
import json
from io import BytesIO

import numpy as np
import requests
from flask import Flask, request, jsonify
from keras.applications import inception_v3
from keras.preprocessing import image

# from flask_cors import CORS

app = Flask(__name__)


# Uncomment this line if you are making a Cross domain request
# CORS(app)

# Testing URL
@app.route('/hello/', methods=['GET', 'POST'])
def hello_world():
    return 'Hello, World!'


@app.route('/imageclassifier/predict/', methods=['POST'])
def image_classifier():
    # Decoding and pre-processing base64 image
    img = image.img_to_array(image.load_img(BytesIO(base64.b64decode(request.form['b64'])),
                                            target_size=(224, 224))) / 255.

    # this line is added because of a bug in tf_serving(1.10.0-dev)
    img = img.astype('float16')

    # Creating payload for TensorFlow serving request
    payload = {
        "instances": [{'input_image': img.tolist()}]
    }

    # Making POST request
    r = requests.post('http://localhost:9000/v1/models/ImageClassifier:predict', json=payload)

    # Decoding results from TensorFlow Serving server
    pred = json.loads(r.content.decode('utf-8'))

    # Returning JSON response to the frontend
    return jsonify(inception_v3.decode_predictions(np.array(pred['predictions']))[0])

6 One-command deployment of TF Serving + Flask

auto_cmd.py is a script for automatically starting and stopping both services (TensorFlow Serving and Flask). You can modify the script to handle more than two services.

import os
import signal
import subprocess

# Making sure to use virtual environment libraries
activate_this = "/home/ubuntu/tensorflow/bin/activate_this.py"
exec(open(activate_this).read(), dict(__file__=activate_this))

# Change directory to where your Flask's app.py is present
os.chdir("/home/ubuntu/Desktop/Medium/keras-and-tensorflow-serving/flask_server")
tf_ic_server = ""
flask_server = ""

try:
    tf_ic_server = subprocess.Popen(["tensorflow_model_server "
                                     "--model_base_path=/home/ubuntu/Desktop/Medium/keras-and-tensorflow-serving/my_image_classifier "
                                     "--rest_api_port=9000 --model_name=ImageClassifier"],
                                    stdout=subprocess.DEVNULL,
                                    shell=True,
                                    preexec_fn=os.setsid)
    print("Started TensorFlow Serving ImageClassifier server!")

    flask_server = subprocess.Popen(["export FLASK_ENV=development && flask run --host=0.0.0.0"],
                                    stdout=subprocess.DEVNULL,
                                    shell=True,
                                    preexec_fn=os.setsid)
    print("Started Flask server!")

    while True:
        print("Type 'exit' and press 'enter' OR press CTRL+C to quit: ")
        in_str = input().strip().lower()
        if in_str == 'q' or in_str == 'exit':
            print('Shutting down all servers...')
            os.killpg(os.getpgid(tf_ic_server.pid), signal.SIGTERM)
            os.killpg(os.getpgid(flask_server.pid), signal.SIGTERM)
            print('Servers successfully shutdown!')
            break
        else:
            continue
except KeyboardInterrupt:
    print('Shutting down all servers...')
    os.killpg(os.getpgid(tf_ic_server.pid), signal.SIGTERM)
    os.killpg(os.getpgid(flask_server.pid), signal.SIGTERM)
    print('Servers successfully shutdown!')

Change the path on line 10 (the os.chdir call) so it points to the directory containing your app.py. You may also need to change line 6 (activate_this) so it points to your virtual environment's bin.

7 Testing Flask + TF Serving

# importing the requests library
import argparse
import base64

import requests

# defining the api-endpoint
API_ENDPOINT = "http://localhost:5000/imageclassifier/predict/"

# taking input image via command line
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
                help="path of the image")
args = vars(ap.parse_args())

image_path = args['image']
b64_image = ""
# Encoding the JPG,PNG,etc. image to base64 format
with open(image_path, "rb") as imageFile:
    b64_image = base64.b64encode(imageFile.read())

# data to be sent to api
data = {'b64': b64_image}

# sending post request and saving response as response object
r = requests.post(url=API_ENDPOINT, data=data)

# extracting the response
print("{}".format(r.text))

Output:

$ python flask_sample_request.py -i ../test_images/car.png
[
  [
    "n04285008", 
    "sports_car", 
    0.998414
  ], 
  [
    "n04037443", 
    "racer", 
    0.00140099
  ], 
  [
    "n03459775", 
    "grille", 
    0.000160794
  ], 
  [
    "n02974003", 
    "car_wheel", 
    9.57862e-06
  ], 
  [
    "n03100240", 
    "convertible", 
    6.01581e-06
  ]
]

If you need to handle cross-origin HTTP requests, enable CORS in app.py by uncommenting the flask_cors import and the CORS(app) line.
