Setting Up the Intel OpenVINO Environment on Ubuntu 18.04.3

The OpenVINO toolkit comes in an open-source edition and an Intel distribution. The Intel distribution is an inference-focused deep learning toolkit released by Intel; its distinguishing feature is that it converts TensorFlow, Caffe, and ONNX models into models compatible with Intel hardware, including the Movidius and the Movidius NCS 2.
Official installation guide: https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_linux.html

Registration and Download

Download page: https://software.intel.com/en-us/openvino-toolkit/choose-download
Choose the Linux version, then click register and download.
You will receive an email containing your activation code and the download link.

Installation

Install the OpenVINO toolkit core components

# Extract the archive
>>> tar -xzvf l_openvino_toolkit_p_2020.1.023.tgz
>>> cd l_openvino_toolkit_p_2020.1.023
# The installer offers both a GUI wizard and a CLI wizard;
# the CLI wizard is used here
>>> sudo ./install.sh
Welcome
--------------------------------------------------------------------------------
Welcome to the Intel® Distribution of OpenVINO™ toolkit 2020.1 for Linux*
--------------------------------------------------------------------------------
The Intel installation wizard will install the Intel® Distribution of OpenVINO™
toolkit 2020.1 for Linux* to your system.

The Intel® Distribution of OpenVINO™ toolkit quickly deploys applications and
solutions that emulate human vision. Based on Convolutional Neural Networks
(CNN), the toolkit extends computer vision (CV) workloads across Intel®
hardware, maximizing performance. The Intel Distribution of OpenVINO toolkit
includes the Intel® Deep Learning Deployment Toolkit (Intel® DLDT).

Before installation please check system requirements:
https://docs.openvinotoolkit.org/2020.1/_docs_install_guides_installing_openvino
_linux.html#system_requirements
and run following script to install external software dependencies:

sudo -E ./install_openvino_dependencies.sh

Please note that after the installation is complete, additional configuration
steps are still required.

For the complete installation procedure, refer to the Installation guide:
https://docs.openvinotoolkit.org/2020.1/_docs_install_guides_installing_openvino
_linux.html.

You will complete the following steps:
   1.  Welcome
   2.  End User License Agreement
   3.  Prerequisites
   4.  Configuration
   5.  Installation
   6.  First Part of Installation is Complete

--------------------------------------------------------------------------------
Press "Enter" key to continue or "q" to quit: 
>>> Enter
* Other names and brands may be claimed as the property of others
--------------------------------------------------------------------------------
Type "accept" to continue or "decline" to go back to the previous menu: 
>>> accept
--------------------------------------------------------------------------------

   1. I consent to the collection of my Information
   2. I do NOT consent to the collection of my Information

   b. Back
   q. Quit installation

--------------------------------------------------------------------------------
Please type a selection: 
>>> 2
--------------------------------------------------------------------------------
Missing optional prerequisites
-- Intel® GPU is not detected on this machine
-- Intel® Graphics Compute Runtime for OpenCL™ Driver is missing but you will
be prompted to install later
--------------------------------------------------------------------------------
   1. Skip prerequisites [ default ]
   2. Show the detailed info about issue(s)
   3. Re-check the prerequisites

   h. Help
   b. Back
   q. Quit installation

--------------------------------------------------------------------------------
Please type a selection or press "Enter" to accept default choice [ 1 ]: 
>>> 1
Configuration > Pre-install Summary
--------------------------------------------------------------------------------
Install location:
    /opt/intel


The following components will be installed:
    Inference Engine                                                       272MB
        Inference Engine Development Kit                                    63MB
        Inference Engine Runtime for Intel® CPU                             25MB
        Inference Engine Runtime for Intel® Processor Graphics              17MB
        Inference Engine Runtime for Intel® Movidius™ VPU                  78MB
        Inference Engine Runtime for Intel® Gaussian Neural Accelerator      5MB
        Inference Engine Runtime for Intel® Vision Accelerator Design with  15MB
Intel® Movidius™ VPUs

    Model Optimizer                                                          4MB
        Model Optimizer Tool                                                 4MB

    Deep Learning Workbench                                                178MB
        Deep Learning Workbench                                            178MB

    OpenCV*                                                                118MB
        OpenCV* Libraries                                                  107MB

    Open Model Zoo                                                         117MB
        Open Model Zoo                                                     117MB

    Intel(R) Media SDK                                                     128MB
        Intel(R) Media SDK                                                 128MB

   Install space required:  668MB

--------------------------------------------------------------------------------

   1. Accept configuration and begin installation [ default ]
   2. Customize installation

   h. Help
   b. Back
   q. Quit installation

--------------------------------------------------------------------------------
Please type a selection or press "Enter" to accept default choice [ 1 ]: 
>>> 1
Prerequisites > Missing Prerequisite(s)
--------------------------------------------------------------------------------
There are one or more unresolved issues based on your system configuration and
component selection.

You can resolve all the issues without exiting the installer and re-check, or
you can exit, resolve the issues, and then run the installation again.

--------------------------------------------------------------------------------
Missing optional prerequisites
-- Intel® GPU is not detected on this machine
-- Intel® Graphics Compute Runtime for OpenCL™ Driver is missing but you will
be prompted to install later
--------------------------------------------------------------------------------
   1. Skip prerequisites [ default ]
   2. Show the detailed info about issue(s)
   3. Re-check the prerequisites

   h. Help
   b. Back
   q. Quit installation

--------------------------------------------------------------------------------
Please type a selection or press "Enter" to accept default choice [ 1 ]: 
>>> 1
First Part of Installation is Complete
--------------------------------------------------------------------------------
The first part of Intel® Distribution of OpenVINO™ toolkit 2020.1 for Linux*
has been successfully installed in 
/opt/intel/openvino_2020.1.023.

ADDITIONAL STEPS STILL REQUIRED: 

Open the Installation guide at:
 https://docs.openvinotoolkit.org/2020.1/_docs_install_guides_installing_openvin
o_linux.html 
and follow the guide instructions to complete the remaining tasks listed below:

 • Set Environment variables 
 • Configure Model Optimizer 
 • Run the Verification Scripts to Verify Installation and Compile Samples

--------------------------------------------------------------------------------
Press "Enter" key to quit: 

Install the external dependencies

>>> cd /opt/intel/openvino/install_dependencies
>>> sudo -E ./install_openvino_dependencies.sh
# This will install a series of packages via apt

Set the environment variables

Temporary (current shell only)

>>> source /opt/intel/openvino/bin/setupvars.sh

Permanent (add to .bashrc or .zshrc)

>>> cd ~
>>> vim .zshrc
source /opt/intel/openvino/bin/setupvars.sh

Adding this line to your rc file has a drawback: the OpenVINO build of OpenCV shadows any OpenCV you installed earlier. I therefore recommend sourcing the script temporarily, only when you actually use OpenVINO.
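A middle ground is to put a small wrapper function in the rc file instead of the bare source line, so the OpenVINO environment only takes effect when you ask for it. A sketch (the function name openvino_env is my own choice, not part of the toolkit):

```shell
# Add to ~/.bashrc or ~/.zshrc. Defining the function changes nothing by
# itself; the OpenVINO environment (including its OpenCV build) is only
# activated in the current shell when you explicitly run `openvino_env`.
openvino_env() {
    source /opt/intel/openvino/bin/setupvars.sh
}
```

This keeps the system OpenCV as the default while making OpenVINO one command away.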

Configure the Model Optimizer

To better understand the Model Optimizer, here is the description from the official documentation:

*The Model Optimizer is a Python-based command-line tool for importing trained models from popular deep learning frameworks such as Caffe, TensorFlow, Apache MXNet, ONNX, and Kaldi.
The Model Optimizer is a key component of the Intel OpenVINO toolkit. You cannot perform inference with a trained model without first converting it with the Model Optimizer. Converting a trained model produces the model's Intermediate Representation (IR), which describes the whole model with a pair of files:

  • .xml: describes the network topology
  • .bin: contains all the weights and biases in binary form*
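Since the .xml side of the IR is plain XML, it can be inspected with standard tools. A sketch that parses a hand-written fragment in the spirit of an IR file (the layer names below are made up for illustration, and real IR files carry many more attributes per layer):

```python
import xml.etree.ElementTree as ET

# A tiny hypothetical fragment in the style of an IR .xml file:
# layers describe the topology nodes, edges connect them.
IR_XML = """
<net name="squeezenet1.1" version="10">
  <layers>
    <layer id="0" name="data" type="Parameter"/>
    <layer id="1" name="conv1" type="Convolution"/>
    <layer id="2" name="prob" type="SoftMax"/>
  </layers>
  <edges>
    <edge from-layer="0" to-layer="1"/>
    <edge from-layer="1" to-layer="2"/>
  </edges>
</net>
"""

root = ET.fromstring(IR_XML)
# List each layer's name and operation type
layers = [(l.get("name"), l.get("type")) for l in root.iter("layer")]
print(layers)  # → [('data', 'Parameter'), ('conv1', 'Convolution'), ('prob', 'SoftMax')]
```

The .bin file holds only raw tensor data; the .xml tells the Inference Engine which byte offsets belong to which layer.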

Configuration steps

Notes:

  1. You can configure all supported frameworks at once, or configure each framework individually as needed;
  2. Configuring TensorFlow support on CentOS is not possible, because TensorFlow does not support CentOS;
  3. The configuration process requires a network connection.

Configure TensorFlow support

>>> cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
>>> sudo ./install_prerequisites_tf.sh

If, after repeatedly installing and removing tensorflow or tensorflow-gpu, pip reports that the package cannot be found (tensorflow-gpu in particular), force a full reinstall:

>>> pip3 install tensorflow-gpu==1.15.2 --ignore-installed

Reference: https://stackoverflow.com/a/45551934/7151777

Run the verification scripts to verify the installation

Run the image classification verification script

The script downloads a SqueezeNet model, converts it to an IR model with the Model Optimizer, and then uses the converted model to classify car.png, printing the Top-10 results.

>>> cd /opt/intel/openvino/deployment_tools/demo
>>> ./demo_squeezenet_download_convert_run.sh
target_precision = FP16
[setupvars.sh] OpenVINO environment initialized


###################################################



Downloading the Caffe model and the prototxt
Installing dependencies
Ign:1 http://dl.google.com/linux/chrome/deb stable InRelease
Hit:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease       
Get:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease [88.7 kB]                                     
Hit:4 http://dl.google.com/linux/chrome/deb stable Release                                                                           
Hit:5 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease                                                         
Get:6 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease [88.7 kB]
Hit:8 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease        
Hit:9 http://ppa.launchpad.net/peek-developers/stable/ubuntu bionic InRelease 
Hit:10 http://ppa.launchpad.net/transmissionbt/ppa/ubuntu bionic InRelease    
Fetched 177 kB in 2s (109 kB/s)
Reading package lists... Done
Building dependency tree       
Reading state information... Done
All packages are up to date.
Run sudo -E apt -y install build-essential python3-pip virtualenv cmake libcairo2-dev libpango1.0-dev libglib2.0-dev libgtk2.0-dev libswscale-dev libavcodec-dev libavformat-dev libgstreamer1.0-0 gstreamer1.0-plugins-base

Reading package lists... Done
Building dependency tree       
Reading state information... Done
build-essential is already the newest version (12.4ubuntu1).
libgtk2.0-dev is already the newest version (2.24.32-1ubuntu1).
virtualenv is already the newest version (15.1.0+ds-1.1).
cmake is already the newest version (3.10.2-1ubuntu2.18.04.1).
gstreamer1.0-plugins-base is already the newest version (1.14.5-0ubuntu1~18.04.1).
libcairo2-dev is already the newest version (1.15.10-2ubuntu0.1).
libglib2.0-dev is already the newest version (2.56.4-0ubuntu0.18.04.4).
libgstreamer1.0-0 is already the newest version (1.14.5-0ubuntu1~18.04.1).
libpango1.0-dev is already the newest version (1.40.14-1ubuntu0.1).
libavcodec-dev is already the newest version (7:3.4.6-0ubuntu0.18.04.1).
libavformat-dev is already the newest version (7:3.4.6-0ubuntu0.18.04.1).
libswscale-dev is already the newest version (7:3.4.6-0ubuntu0.18.04.1).
python3-pip is already the newest version (9.0.1-2.3~ubuntu1.18.04.1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Reading package lists... Done
Building dependency tree       
Reading state information... Done
libpng-dev is already the newest version (1.6.34-1ubuntu0.18.04.2).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.
Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue.
To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.
WARNING: The directory '/home/microfat/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: pyyaml in /usr/lib/python3/dist-packages (from -r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 1)) (3.12)
Requirement already satisfied: requests in /home/microfat/.local/lib/python3.6/site-packages (from -r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (2.22.0)
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (2018.1.18)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/lib/python3/dist-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (2.6)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/lib/python3/dist-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (1.22)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/lib/python3/dist-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (3.0.4)
Run python3 /opt/intel/openvino_2020.1.023/deployment_tools/open_model_zoo/tools/downloader/downloader.py --name squeezenet1.1 --output_dir /home/microfat/openvino_models/models --cache_dir /home/microfat/openvino_models/cache

################|| Downloading models ||################

========== Retrieving /home/microfat/openvino_models/models/public/squeezenet1.1/squeezenet1.1.prototxt from the cache

========== Retrieving /home/microfat/openvino_models/models/public/squeezenet1.1/squeezenet1.1.caffemodel from the cache

################|| Post-processing ||################

========== Replacing text in /home/microfat/openvino_models/models/public/squeezenet1.1/squeezenet1.1.prototxt


Target folder /home/microfat/openvino_models/ir/public/squeezenet1.1/FP16 already exists. Skipping IR generation  with Model Optimizer.If you want to convert a model again, remove the entire /home/microfat/openvino_models/ir/public/squeezenet1.1/FP16 folder. Then run the script again



###################################################

Build Inference Engine samples

-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for C++ include unistd.h
-- Looking for C++ include unistd.h - found
-- Looking for C++ include stdint.h
-- Looking for C++ include stdint.h - found
-- Looking for C++ include sys/types.h
-- Looking for C++ include sys/types.h - found
-- Looking for C++ include fnmatch.h
-- Looking for C++ include fnmatch.h - found
-- Looking for strtoll
-- Looking for strtoll - found
-- Found InferenceEngine: /opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/lib/intel64/libinference_engine.so (Required is at least version "2.1") 
-- Configuring done
-- Generating done
-- Build files have been written to: /home/microfat/inference_engine_samples_build
Scanning dependencies of target gflags_nothreads_static
Scanning dependencies of target format_reader
[  9%] Building CXX object thirdparty/gflags/CMakeFiles/gflags_nothreads_static.dir/src/gflags_completions.cc.o
[ 18%] Building CXX object thirdparty/gflags/CMakeFiles/gflags_nothreads_static.dir/src/gflags_reporting.cc.o
[ 27%] Building CXX object thirdparty/gflags/CMakeFiles/gflags_nothreads_static.dir/src/gflags.cc.o
[ 36%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/bmp.cpp.o
[ 45%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/MnistUbyte.cpp.o
[ 54%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/format_reader.cpp.o
[ 63%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/opencv_wraper.cpp.o
[ 72%] Linking CXX shared library ../../intel64/Release/lib/libformat_reader.so
[ 72%] Built target format_reader
[ 81%] Linking CXX static library ../../intel64/Release/lib/libgflags_nothreads.a
[ 81%] Built target gflags_nothreads_static
Scanning dependencies of target classification_sample_async
[ 90%] Building CXX object classification_sample_async/CMakeFiles/classification_sample_async.dir/main.cpp.o
[100%] Linking CXX executable ../intel64/Release/classification_sample_async
[100%] Built target classification_sample_async


###################################################

Run Inference Engine classification sample

Run ./classification_sample_async -d CPU -i /opt/intel/openvino/deployment_tools/demo/car.png -m /home/microfat/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml

[ INFO ] InferenceEngine: 
	API version ............ 2.1
	Build .................. 37988
	Description ....... API
[ INFO ] Parsing input parameters
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /opt/intel/openvino/deployment_tools/demo/car.png
[ INFO ] Creating Inference Engine
	CPU
	MKLDNNPlugin version ......... 2.1
	Build ........... 37988

[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (787, 259) to (227, 227)
[ INFO ] Batch size is 1
[ INFO ] Loading model to the device
[ INFO ] Create infer request
[ INFO ] Start inference (10 asynchronous executions)
[ INFO ] Completed 1 async request execution
[ INFO ] Completed 2 async request execution
[ INFO ] Completed 3 async request execution
[ INFO ] Completed 4 async request execution
[ INFO ] Completed 5 async request execution
[ INFO ] Completed 6 async request execution
[ INFO ] Completed 7 async request execution
[ INFO ] Completed 8 async request execution
[ INFO ] Completed 9 async request execution
[ INFO ] Completed 10 async request execution
[ INFO ] Processing output blobs

Top 10 results:

Image /opt/intel/openvino/deployment_tools/demo/car.png

classid probability label
------- ----------- -----
817     0.6853030   sports car, sport car
479     0.1835197   car wheel
511     0.0917197   convertible
436     0.0200694   beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
751     0.0069604   racer, race car, racing car
656     0.0044177   minivan
717     0.0024739   pickup, pickup truck
581     0.0017788   grille, radiator grille
468     0.0013083   cab, hack, taxi, taxicab
661     0.0007443   Model T

[ INFO ] Execution successful

[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool


###################################################

Demo completed successfully.
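The Top 10 table at the end of the log is simply the network's output probabilities sorted in descending order. A minimal sketch of that ranking step, reusing a few of the class IDs and values from the run above (all other entries zeroed out for brevity):

```python
def top_k(probs, k=10):
    """Return (class_id, probability) pairs, highest probability first."""
    ranked = sorted(enumerate(probs), key=lambda p: p[1], reverse=True)
    return ranked[:k]

# SqueezeNet has 1000 ImageNet classes; fill in three from the demo output
probs = [0.0] * 1000
probs[817] = 0.6853030   # sports car, sport car
probs[479] = 0.1835197   # car wheel
probs[511] = 0.0917197   # convertible

print(top_k(probs, k=3))  # → [(817, 0.685303), (479, 0.1835197), (511, 0.0917197)]
```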

If you run into connection timeouts during the run, e.g. ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443)"), try switching to a package mirror or routing through p104.

Run the inference pipeline verification script

The script downloads three pre-trained IR models and chains them into a pipeline: first a vehicle detection model locates the vehicles and license plates; the vehicle crop is fed to a vehicle attributes model, which recognizes features such as the vehicle's color; finally the plate crop is fed to a license plate recognition model, which reads the plate characters.
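Expressed as code, the demo's data flow looks roughly like the sketch below. The detect_*/recognize_* functions are placeholders standing in for the three models, with made-up return values; they are not real Inference Engine calls:

```python
def detect_vehicles(frame):
    """Stage 1 (vehicle-license-plate-detection-barrier-0106):
    returns vehicle and plate bounding boxes (placeholder values)."""
    return [{"vehicle_roi": (50, 40, 300, 200), "plate_roi": (120, 180, 200, 210)}]

def recognize_attributes(vehicle_roi):
    """Stage 2 (vehicle-attributes-recognition-barrier-0039):
    recognizes features such as the vehicle's color (placeholder)."""
    return {"color": "white"}

def recognize_plate(plate_roi):
    """Stage 3 (license-plate-recognition-barrier-0001):
    reads the plate characters (placeholder)."""
    return "<Beijing>MD711"

def run_pipeline(frame):
    # Each stage-1 detection feeds its crops into stages 2 and 3
    results = []
    for det in detect_vehicles(frame):
        attrs = recognize_attributes(det["vehicle_roi"])
        plate = recognize_plate(det["plate_roi"])
        results.append({**attrs, "plate": plate})
    return results

print(run_pipeline("car_1.bmp"))
```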

>>> ./demo_security_barrier_camera.sh
Ign:1 http://dl.google.com/linux/chrome/deb stable InRelease
Hit:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease       
Get:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease [88.7 kB]          
Hit:4 http://dl.google.com/linux/chrome/deb stable Release                                                
Hit:6 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease                               
Hit:7 http://ppa.launchpad.net/peek-developers/stable/ubuntu bionic InRelease                       
Hit:8 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease                        
Hit:9 http://ppa.launchpad.net/transmissionbt/ppa/ubuntu bionic InRelease
Get:10 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease [88.7 kB]
Fetched 177 kB in 2s (94.5 kB/s)                                
Reading package lists... Done
Building dependency tree       
Reading state information... Done
All packages are up to date.
Run sudo -E apt -y install build-essential python3-pip virtualenv cmake libcairo2-dev libpango1.0-dev libglib2.0-dev libgtk2.0-dev libswscale-dev libavcodec-dev libavformat-dev libgstreamer1.0-0 gstreamer1.0-plugins-base

Reading package lists... Done
Building dependency tree       
Reading state information... Done
build-essential is already the newest version (12.4ubuntu1).
libgtk2.0-dev is already the newest version (2.24.32-1ubuntu1).
virtualenv is already the newest version (15.1.0+ds-1.1).
cmake is already the newest version (3.10.2-1ubuntu2.18.04.1).
gstreamer1.0-plugins-base is already the newest version (1.14.5-0ubuntu1~18.04.1).
libcairo2-dev is already the newest version (1.15.10-2ubuntu0.1).
libglib2.0-dev is already the newest version (2.56.4-0ubuntu0.18.04.4).
libgstreamer1.0-0 is already the newest version (1.14.5-0ubuntu1~18.04.1).
libpango1.0-dev is already the newest version (1.40.14-1ubuntu0.1).
libavcodec-dev is already the newest version (7:3.4.6-0ubuntu0.18.04.1).
libavformat-dev is already the newest version (7:3.4.6-0ubuntu0.18.04.1).
libswscale-dev is already the newest version (7:3.4.6-0ubuntu0.18.04.1).
python3-pip is already the newest version (9.0.1-2.3~ubuntu1.18.04.1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Reading package lists... Done
Building dependency tree       
Reading state information... Done
libpng-dev is already the newest version (1.6.34-1ubuntu0.18.04.2).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.
Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue.
To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.
WARNING: The directory '/home/microfat/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: pyyaml in /usr/lib/python3/dist-packages (from -r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 1)) (3.12)
Requirement already satisfied: requests in /home/microfat/.local/lib/python3.6/site-packages (from -r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (2.22.0)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/lib/python3/dist-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/lib/python3/dist-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (1.22)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/lib/python3/dist-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (2.6)
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (2018.1.18)
[setupvars.sh] OpenVINO environment initialized


###################################################

Downloading Intel models

target_precision = FP16
Run python3 /opt/intel/openvino_2020.1.023/deployment_tools/open_model_zoo/tools/downloader/downloader.py --name vehicle-license-plate-detection-barrier-0106 --output_dir /home/microfat/openvino_models/ir --cache_dir /home/microfat/openvino_models/cache

################|| Downloading models ||################

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-license-plate-detection-barrier-0106/FP32/vehicle-license-plate-detection-barrier-0106.xml from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-license-plate-detection-barrier-0106/FP32/vehicle-license-plate-detection-barrier-0106.bin from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.bin from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-license-plate-detection-barrier-0106/FP32-INT8/vehicle-license-plate-detection-barrier-0106.xml from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-license-plate-detection-barrier-0106/FP32-INT8/vehicle-license-plate-detection-barrier-0106.bin from the cache

################|| Post-processing ||################

Run python3 /opt/intel/openvino_2020.1.023/deployment_tools/open_model_zoo/tools/downloader/downloader.py --name license-plate-recognition-barrier-0001 --output_dir /home/microfat/openvino_models/ir --cache_dir /home/microfat/openvino_models/cache

################|| Downloading models ||################

========== Retrieving /home/microfat/openvino_models/ir/intel/license-plate-recognition-barrier-0001/FP32/license-plate-recognition-barrier-0001.xml from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/license-plate-recognition-barrier-0001/FP32/license-plate-recognition-barrier-0001.bin from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.xml from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.bin from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/license-plate-recognition-barrier-0001/FP32-INT8/license-plate-recognition-barrier-0001.xml from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/license-plate-recognition-barrier-0001/FP32-INT8/license-plate-recognition-barrier-0001.bin from the cache

################|| Post-processing ||################

Run python3 /opt/intel/openvino_2020.1.023/deployment_tools/open_model_zoo/tools/downloader/downloader.py --name vehicle-attributes-recognition-barrier-0039 --output_dir /home/microfat/openvino_models/ir --cache_dir /home/microfat/openvino_models/cache

################|| Downloading models ||################

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-attributes-recognition-barrier-0039/FP32/vehicle-attributes-recognition-barrier-0039.xml from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-attributes-recognition-barrier-0039/FP32/vehicle-attributes-recognition-barrier-0039.bin from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.xml from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.bin from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-attributes-recognition-barrier-0039/FP32-INT8/vehicle-attributes-recognition-barrier-0039.xml from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-attributes-recognition-barrier-0039/FP32-INT8/vehicle-attributes-recognition-barrier-0039.bin from the cache

################|| Post-processing ||################



###################################################

Build Inference Engine demos

-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for C++ include unistd.h
-- Looking for C++ include unistd.h - found
-- Looking for C++ include stdint.h
-- Looking for C++ include stdint.h - found
-- Looking for C++ include sys/types.h
-- Looking for C++ include sys/types.h - found
-- Looking for C++ include fnmatch.h
-- Looking for C++ include fnmatch.h - found
-- Looking for C++ include stddef.h
-- Looking for C++ include stddef.h - found
-- Check size of uint32_t
-- Check size of uint32_t - done
-- Looking for strtoll
-- Looking for strtoll - found
-- Found OpenCV: /opt/intel/openvino_2020.1.023/opencv (found version "4.2.0") found components:  core imgproc 
-- Found InferenceEngine: /opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/lib/intel64/libinference_engine.so (Required is at least version "2.0") 
-- Configuring done
-- Generating done
-- Build files have been written to: /home/microfat/inference_engine_demos_build
[ 40%] Built target gflags_nothreads_static
[ 80%] Built target monitors
[100%] Built target security_barrier_camera_demo


###################################################

Run Inference Engine security_barrier_camera demo

Run ./security_barrier_camera_demo -d CPU -d_va CPU -d_lpr CPU -i /opt/intel/openvino/deployment_tools/demo/car_1.bmp -m /home/microfat/openvino_models/ir/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml -m_lpr /home/microfat/openvino_models/ir/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.xml -m_va /home/microfat/openvino_models/ir/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.xml

[ INFO ] InferenceEngine: 0x7f4e28e1e040
[ INFO ] Files were added: 1
[ INFO ]     /opt/intel/openvino/deployment_tools/demo/car_1.bmp
[ INFO ] Loading device CPU
	CPU
	MKLDNNPlugin version ......... 2.1
	Build ........... 37988

[ INFO ] Loading detection model to the CPU plugin
[ INFO ] Loading Vehicle Attribs model to the CPU plugin
[ INFO ] Loading Licence Plate Recognition (LPR) model to the CPU plugin
[ INFO ] Number of InferRequests: 1 (detection), 3 (classification), 3 (recognition)
[ INFO ] 4 streams for CPU
[ INFO ] Display resolution: 1920x1080
[ INFO ] Number of allocated frames: 3
[ INFO ] Resizable input with support of ROI crop and auto resize is disabled
0.2FPS for (3 / 1) frames
Detection InferRequests usage: 0.0%

[ INFO ] Execution successful


###################################################

Demo completed successfully.

The first run downloads the models. If the downloads are too slow, you can speed them up with p104, but the build step will then fail:

>>> p104 ./demo_security_barrier_camera.sh
CMake Error at /usr/share/cmake-3.10/Modules/FindPackageHandleStandardArgs.cmake:137 (message):
  Some of mandatory Inference Engine components are not found.  Please
  consult InferenceEgnineConfig.cmake module's help page.  (missing:
  IE_RELEASE_LIBRARY IE_C_API_RELEASE_LIBRARY IE_NN_BUILDER_RELEASE_LIBRARY)
  (Required is at least version "2.0")
Call Stack (most recent call first):
  /usr/share/cmake-3.10/Modules/FindPackageHandleStandardArgs.cmake:378 (_FPHSA_FAILURE_MESSAGE)
  /opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/share/InferenceEngineConfig.cmake:99 (find_package_handle_standard_args)
  CMakeLists.txt:213 (find_package)


CMake Error at CMakeLists.txt:213 (find_package):
  Found package configuration file:

    /opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/share/InferenceEngineConfig.cmake

  but it set InferenceEngine_FOUND to FALSE so package "InferenceEngine" is
  considered to be NOT FOUND.


-- Configuring incomplete, errors occurred!
See also "/home/microfat/inference_engine_demos_build/CMakeFiles/CMakeOutput.log".
Error on or near line 188; exiting with status 1

The workaround is to download the models through p104 first, then drop p104 and rerun the script.

Configure the Neural Compute Stick

Configuration

Add the current user to the users group

>>> sudo usermod -a -G users "$(whoami)"

Log out and log back in for the group change to take effect.
Install the USB rules

>>> sudo cp /opt/intel/openvino/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/
>>> sudo udevadm control --reload-rules
>>> sudo udevadm trigger
>>> sudo ldconfig

Reboot for the rules to take effect.
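After the reboot you can check whether the stick enumerates on the USB bus. The helper below greps lsusb output; 03e7 is, to my knowledge, the Movidius USB vendor ID, so treat that value as an assumption:

```shell
# Pipe `lsusb` output into check_ncs to see whether a Movidius stick
# (vendor ID 03e7, assumed) is visible on the bus.
check_ncs() {
    if grep -qi '03e7\|movidius'; then
        echo "NCS detected"
    else
        echo "NCS not found"
    fi
}

# Typical use:
# lsusb | check_ncs
```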

Test

I tested with the code from my earlier post on real-time object detection with Movidius + Raspberry Pi; the code is at: https://github.com/MacwinWin/raspberry_pi_object_detection.git

# Clone the repository
>>> git clone https://github.com/MacwinWin/raspberry_pi_object_detection.git
>>> cd raspberry_pi_object_detection
# Check out the latest release branch
>>> git checkout 1.2
>>> python3 openvino_video_object_detection.py --prototxt MobileNetSSD_deploy.prototxt --model MobileNetSSD_deploy.caffemodel --video /home/pi/Git/raspberry_pi_object_detection/airbus.mp4 --device NCS
[INFO] loading model...
[INFO] starting video stream...
Traceback (most recent call last):
  File "openvino_video_object_detection.py", line 55, in <module>
    frame = imutils.resize(frame, width=400)
  File "/home/microfat/.local/lib/python3.6/site-packages/imutils/convenience.py", line 69, in resize
    (h, w) = image.shape[:2]
AttributeError: 'NoneType' object has no attribute 'shape'

If you see the error above, the cause is that OpenCV has not been switched to the OpenVINO build. Setting the environment variables switches the OpenCV version for the current shell:

>>> source /opt/intel/openvino/bin/setupvars.sh

Run the script again and the detection works as expected.
