Getting Started with DeepStream

I. Introduction

What is a DeepStream application?

A DeepStream application brings deep neural networks and other complex processing tasks into a stream-processing pipeline to enable near-real-time analysis of video and other sensor data. Extracting meaningful insights from these sensors creates opportunities to improve operational efficiency and safety. For example, the camera is currently the most widely used IoT sensor: cameras can be found in our homes, on streets, in parking lots, shopping malls, warehouses, and factories, essentially everywhere. The potential uses of video analytics are enormous: access control, loss prevention, automated checkout, surveillance, security, automated inspection (QA), package sorting (smart logistics), traffic control/engineering, industrial automation, and more.
More specifically, a DeepStream application is a set of modular plugins connected to form a processing pipeline. Each plugin represents a functional block, for example TensorRT-based inference or multi-stream decoding. Hardware-accelerated plugins interact with the underlying hardware, where applicable, to deliver the best performance: the decode plugin interacts with NVDEC, and the inference plugin interacts with the GPU or DLA. Each plugin can be instantiated in the pipeline as many times as needed.

What is the DeepStream SDK?

The NVIDIA DeepStream SDK is a streaming-analytics toolkit based on the open-source GStreamer multimedia framework. It accelerates the development of scalable IVA (Intelligent Video Analytics) applications, letting developers focus on the core deep learning networks instead of designing end-to-end applications from scratch. The SDK is supported on systems containing NVIDIA Jetson modules or NVIDIA dGPU adapters. It consists of an extensible collection of hardware-accelerated plugins that interact with low-level libraries for optimal performance, and it defines a standardized metadata structure that allows custom, user-specific additions.

For more details and instructions on the DeepStream SDK, refer to the following materials:
NVIDIA DeepStream SDK Development Guide
NVIDIA DeepStream Plugin Manual
NVIDIA DeepStream SDK API Reference

DeepStream SDK reference applications

The DeepStream SDK ships with several test applications, including pre-trained models, sample configuration files, and sample video streams for running them. Additional samples and source code examples provide enough information for most IVA use cases to speed up development. The test applications demonstrate:

How to use DeepStream elements (e.g., acquiring sources, decoding and multiplexing multiple streams, running inference with pre-trained models, annotating and rendering images)
How to form a batch of frames and run inference on it to improve resource utilization
How to add custom/user-specific metadata to any DeepStream component
And more. For complete details, see the NVIDIA DeepStream SDK Development Guide.

GStreamer plugins:

GStreamer is a framework for plugins, data flow, and media-type handling/negotiation, used to build streaming media applications. Plugins are shared libraries that are loaded dynamically at runtime and can be extended and upgraded independently. When arranged and linked together, plugins form a processing pipeline that defines the data flow of a streaming application. You can learn more about GStreamer through its extensive online documentation, starting with "What is GStreamer?".
In addition to the open-source plugins shipped with the GStreamer framework libraries, the DeepStream SDK includes NVIDIA hardware-accelerated plugins that leverage GPU capabilities. For the complete list of DeepStream GStreamer plugins, see the NVIDIA DeepStream Plugin Manual.
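
As a minimal illustration of plugins linked into a pipeline, the following command (a sketch, assuming the standard GStreamer tools are installed) generates a test pattern and renders it to a window:

$ gst-launch-1.0 videotestsrc ! videoconvert ! autovideosink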

Open-source GStreamer plugins:

  • GstFileSrc - reads data from a file: video data or images.
  • GstH264Parse - parses an incoming H264 stream. For the H265 codec, use H265Parse.
  • GstRtpH264Pay - packs H264-encoded payloads into RTP packets (RFC 3984).
  • GstUDPSink - sends UDP packets over the network. Paired with the RTP payloader (GstRtpH264Pay), it enables RTP streaming.
  • GstCapsFilter - restricts the data format without modifying the data.
  • GstV4l2Src - captures video from a v4l2 device.
  • GstQTMux - merges streams (audio and video) into a QuickTime (.mov) file.
  • GstFileSink - writes incoming data to a file in the local file system.
  • GstURIDecodeBin - decodes data from a URI into raw media. It selects a source element that can handle the given "uri" scheme and connects it to a decoder.
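
For example, several of the plugins above can be chained together to stream an H264 file over RTP. A sketch (the file path, host, and port are placeholders; the receiving side would need a matching rtph264depay pipeline):

$ gst-launch-1.0 filesrc location=sample_720p.h264 ! h264parse ! rtph264pay ! udpsink host=127.0.0.1 port=5400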

NVIDIA hardware-accelerated plugins:

  • Gst-nvstreammux - batches streams before sending them for AI inference.
  • Gst-nvinfer - runs inference using TensorRT.
  • Gst-nvvideo4linux2 - decodes video streams using the hardware-accelerated decoder (NVDEC); encodes RAW data in I420 format into H264 or H265 output streams using the hardware-accelerated encoder (NVENC).
  • Gst-nvvideoconvert - performs video color-format conversion. A first Gst-nvvideoconvert instance placed before Gst-nvdsosd converts the stream from I420 to RGBA, and a second instance placed after Gst-nvdsosd converts the data back from RGBA to I420.
  • Gst-nvdsosd - draws bounding boxes, text, and region-of-interest (ROI) polygons.
  • Gst-nvtracker - tracks objects between frames.
  • Gst-nvmultistreamtiler - composites a 2D tile from batched buffers.
  • Gst-nvv4l2decoder - decodes a video stream.
  • Gst-nvv4l2h264enc - encodes a video stream.
  • Gst-nvarguscamerasrc - provides options to control ISP properties using the Argus API.
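
Putting these together, the deepstream-test1 pipeline discussed later in this post can be approximated with gst-launch-1.0. A sketch (the file path and mux properties are illustrative; on dGPU the nvegltransform stage is dropped):

$ gst-launch-1.0 filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! \
      m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
      nvinfer config-file-path=dstest1_pgie_config.txt ! nvvideoconvert ! \
      nvdsosd ! nvegltransform ! nveglglessink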

Video Analytics

Turning video into analytics data with the DeepStream SDK

The DeepStream SDK's general streaming-analytics architecture defines an extensible video-processing pipeline that can be used to perform inference, object tracking, and reporting. As a DeepStream application analyzes each video frame, plugins extract information and store it as part of a cascaded metadata record, keeping the record associated with its source frame. The complete metadata collection at the end of the pipeline represents the full set of information that the deep learning models and other analytics plugins extracted from the frame. A DeepStream application can use this information for display, or transmit it externally as part of a message for further analysis or long-term archiving.
DeepStream uses an extensible, standardized structure for metadata. The base metadata structure, NvDsBatchMeta, starts with batch-level metadata created inside the required Gst-nvstreammux plugin. Subsidiary metadata structures hold frame, object, classifier, and label data. DeepStream also provides a mechanism for adding user-specific metadata at the batch, frame, or object level.
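
As a minimal sketch of how this hierarchy is traversed (the full working version is the osd_sink_pad_buffer_probe function in deepstream_test1_app.c later in this post), a buffer probe can walk batch -> frame -> object metadata like this:

#include <gst/gst.h>
#include "gstnvdsmeta.h"

/* Walk the metadata hierarchy attached to a GstBuffer:
 * NvDsBatchMeta -> NvDsFrameMeta -> NvDsObjectMeta. */
static void walk_metadata (GstBuffer *buf)
{
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  NvDsMetaList *l_frame, *l_obj;
  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      g_print ("frame %d: detected class %d\n", frame_meta->frame_num, obj_meta->class_id);
    }
  }
}
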
For more information on metadata structures and their usage, see the DeepStream Plugin Manual.

PS: You may find JupyterLab useful:
Installing JupyterLab on Jetson
TX2 getting-started tutorial (software): installing Jupyter


II. Download

Official site: https://developer.nvidia.com/deepstream-download
Download v4.0.2 (login required): https://developer.nvidia.com/deepstream-402-jetson-deb
The DeepStream I actually used is the one on a Jetson Xavier, flashed together with the JetPack 4.3 image; it is likewise v4.0.2.
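
If you install from the downloaded .deb package instead of flashing with JetPack, installation is a standard dpkg step (the file name below is a placeholder; use the name of the package you actually downloaded):

$ sudo dpkg -i deepstream-4.0_4.0.2-1_arm64.deb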


III. Official Demos

1. Running the official demo directly
# I worked on a Xavier; DeepStream is installed under /opt/nvidia
$ cd /opt/nvidia/deepstream/deepstream-4.0/samples/configs/deepstream-app
# The first run only generates the model engine file; run the command a second time to see the demo
$ deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt
$ deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt
# Of course, we can also use: source12_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx2.txt


# We can also inspect the configuration file:
$ vim source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt

# Here you can see the video source in use
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file://../../streams/sample_1080p_h264.mp4
num-sources=8
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

# Reportedly, if the application produces no log output after starting, change type to 2 and sync to 0 in [sink0] below (settings the developers arrived at by trial and error)
[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=5
sync=1
source-id=0
gpu-id=0
qos=0
nvbuf-memory-type=0
overlay-id=1

# You can also enable sink1 (enable=1) to save the output as out.mp4.
[sink1]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0

For the full details of the configuration file format, see: https://docs.nvidia.com/metropolis/deepstream/4.0/dev-guide/index.html#page/DeepStream_Development_Guide%2Fdeepstream_app_config.3.2.html%23wwpID0E0WB0HA

samples/configs: configuration files for the reference application

  • source30_1080p_resnet_dec_infer_tiled_display_int8.txt: demonstrates 30-stream decode with primary inference. (dGPU and Jetson AGX Xavier platforms only.)
  • source4_1080p_resnet_dec_infer_tiled_display_int8.txt: demonstrates four-stream decode with primary inference, object tracking, and three different secondary classifiers. (dGPU and Jetson AGX Xavier platforms only.)
  • source4_1080p_resnet_dec_infer_tracker_sgie_tiled_display_int8_gpu1.txt: demonstrates four-stream decode with primary inference, object tracking, and three different secondary classifiers on GPU 1 (for systems with multiple GPU cards). dGPU platforms only.
  • config_infer_primary.txt: configures an nvinfer element as the primary detector.
  • config_infer_secondary_carcolor.txt, config_infer_secondary_carmake.txt, config_infer_secondary_vehicletypes.txt: configure nvinfer elements as secondary classifiers.
  • iou_config.txt: configures a low-level IOU (Intersection over Union) tracker.
  • source1_usb_dec_infer_resnet_int8.txt: demonstrates one USB camera as input.
  • source1_csi_dec_infer_resnet_int8.txt: demonstrates one CSI camera as input; Jetson only.
  • source2_csi_usb_dec_infer_resnet_int8.txt: demonstrates one CSI camera and one USB camera as input; Jetson only.
  • source6_csi_dec_infer_resnet_int8.txt: demonstrates six CSI cameras as input; Jetson only.
  • source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt: demonstrates 8-stream decode + inference + tracker; Jetson Nano only.
  • source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx1.txt: demonstrates 8-stream decode + inference + tracker; Jetson TX1 only.
  • source12_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx2.txt: demonstrates 12-stream decode + inference + tracker; Jetson TX2 only.

2. Modifying the configuration file to use an RTSP camera

The configuration file below can be used to pull from your own network camera.
The changes made:

  • Under [tiled-display], changed the output layout:
    rows=1
    columns=1
  • Under [source0], changed the type, the video source URI, and the number of sources:
    type=4
    uri=rtsp://admin:[email protected]:554/cam/realmonitor?channel=1&subtype=0
    num-sources=1
  • Under [sink0], changed type=2 (in testing, this switches the video output to a window, so the result is also visible through NoMachine)
    type=2
  • Under [sink1], changed enable=1 and the output file name
    enable=1
    output-file=out_toson.mp4
# Copyright (c) 2019 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://admin:[email protected]:554/cam/realmonitor?channel=1&subtype=0
num-sources=1
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0
#qos=0
nvbuf-memory-type=0
#overlay-id=1

[sink1]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out_toson.mp4
source-id=0

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=8
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine
batch-size=8
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=4
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_nano.txt

[tracker]
enable=1
tracker-width=480
tracker-height=272
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
#ll-config-file required for IOU only
#ll-config-file=iou_config.txt
gpu-id=0

[tests]
file-loop=0
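
To run the modified configuration, invoke deepstream-app the same way as before (the file name is whatever you saved the edited config as; source8_rtsp.txt here is just a placeholder):

$ deepstream-app -c source8_rtsp.txt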

IV. Sample Source Code

1. Find the source code in the sources directory; we can build and run the demo ourselves
# Source directory (we can build and run the demo ourselves)
$ cd /opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-test1
$ ls
deepstream_test1_app.c  dstest1_pgie_config.txt  Makefile  README
$ make
$ ls
deepstream-test1-app    deepstream_test1_app.o   Makefile
deepstream_test1_app.c  dstest1_pgie_config.txt  README
# Run deepstream-test1-app
$ ./deepstream-test1-app /opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_720p.h264
# Sample video/image files that ship with the demo
$ cd /opt/nvidia/deepstream/deepstream-4.0/samples/streams
$ ls
sample_1080p_h264.mp4  sample_720p.h264  sample_720p.mjpeg  sample_cam6.mp4
sample_1080p_h265.mp4  sample_720p.jpg   sample_720p.mp4    sample_industrial.jpg
Stream                  Type of stream
sample_1080p_h264.mp4   H264 containerized stream
sample_1080p_h265.mp4   H265 containerized stream
sample_720p.h264        H264 elementary stream
sample_720p.jpg         JPEG image
sample_720p.mjpeg       MJPEG stream
sample_cam6.mp4         H264 containerized stream (360D camera stream)
sample_industrial.jpg   JPEG image

Log output:

Now playing: /opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_720p.h264

Using winsys: x11 
Opening in BLOCKING MODE 
Creating LL OSD context new
0:00:01.658822508 30700   0x558a9bcd20 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:01:03.125062075 30700   0x558a9bcd20 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_int8.engine
Running...
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Creating LL OSD context new
Frame Number = 0 Number of objects = 5 Vehicle Count = 3 Person Count = 2
Frame Number = 1 Number of objects = 5 Vehicle Count = 3 Person Count = 2
Frame Number = 2 Number of objects = 5 Vehicle Count = 3 Person Count = 2
..
Frame Number = 3 Number of objects = 7 Vehicle Count = 4 Person Count = 3
Frame Number = 4 Number of objects = 6 Vehicle Count = 4 Person Count = 2
Frame Number = 1441 Number of objects = 0 Vehicle Count = 0 Person Count = 0
End of stream
Returned, stopping playback
Deleting pipeline
2. Source files

(The Makefile and dstest1_pgie_config.txt are slightly modified: I copied the include files out, and the model paths are absolute.)

deepstream_test1_app.c

/*
 * Copyright (c) 2018-2019, NVIDIA CORPORATION. All rights reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#include <gst/gst.h>
#include <glib.h>
#include <stdio.h>
#include "gstnvdsmeta.h"

#define MAX_DISPLAY_LEN 64

#define PGIE_CLASS_ID_VEHICLE 0
#define PGIE_CLASS_ID_PERSON 2

/* The muxer output resolution must be set if the input streams will be of
 * different resolution. The muxer will scale all the input frames to this
 * resolution. */
#define MUXER_OUTPUT_WIDTH 1920
#define MUXER_OUTPUT_HEIGHT 1080

/* Muxer batch formation timeout, in microseconds. Should ideally be set
 * based on the fastest source's framerate (e.g. 40 ms = 40000 usec);
 * this sample uses a generous 4 seconds. */
#define MUXER_BATCH_TIMEOUT_USEC 4000000

gint frame_number = 0;
gchar pgie_classes_str[4][32] = { "Vehicle", "TwoWheeler", "Person",
  "Roadsign"
};

/* osd_sink_pad_buffer_probe  will extract metadata received on OSD sink pad
 * and update params for drawing rectangle, object information etc. */

static GstPadProbeReturn
osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
    GstBuffer *buf = (GstBuffer *) info->data;
    guint num_rects = 0; 
    NvDsObjectMeta *obj_meta = NULL;
    guint vehicle_count = 0;
    guint person_count = 0;
    NvDsMetaList * l_frame = NULL;
    NvDsMetaList * l_obj = NULL;
    NvDsDisplayMeta *display_meta = NULL;

    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

    for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
        int offset = 0;
        for (l_obj = frame_meta->obj_meta_list; l_obj != NULL;
                l_obj = l_obj->next) {
            obj_meta = (NvDsObjectMeta *) (l_obj->data);
            if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE) {
                vehicle_count++;
                num_rects++;
            }
            if (obj_meta->class_id == PGIE_CLASS_ID_PERSON) {
                person_count++;
                num_rects++;
            }
        }
        display_meta = nvds_acquire_display_meta_from_pool(batch_meta);
        NvOSD_TextParams *txt_params  = &display_meta->text_params[0];
        display_meta->num_labels = 1;
        txt_params->display_text = g_malloc0 (MAX_DISPLAY_LEN);
        offset = snprintf(txt_params->display_text, MAX_DISPLAY_LEN, "Person = %d ", person_count);
        offset = snprintf(txt_params->display_text + offset , MAX_DISPLAY_LEN, "Vehicle = %d ", vehicle_count);

        /* Now set the offsets where the string should appear */
        txt_params->x_offset = 10;
        txt_params->y_offset = 12;

        /* Font , font-color and font-size */
        txt_params->font_params.font_name = "Serif";
        txt_params->font_params.font_size = 10;
        txt_params->font_params.font_color.red = 1.0;
        txt_params->font_params.font_color.green = 1.0;
        txt_params->font_params.font_color.blue = 1.0;
        txt_params->font_params.font_color.alpha = 1.0;

        /* Text background color */
        txt_params->set_bg_clr = 1;
        txt_params->text_bg_clr.red = 0.0;
        txt_params->text_bg_clr.green = 0.0;
        txt_params->text_bg_clr.blue = 0.0;
        txt_params->text_bg_clr.alpha = 1.0;

        nvds_add_display_meta_to_frame(frame_meta, display_meta);
    }

    g_print ("Frame Number = %d Number of objects = %d "
            "Vehicle Count = %d Person Count = %d\n",
            frame_number, num_rects, vehicle_count, person_count);
    frame_number++;
    return GST_PAD_PROBE_OK;
}

static gboolean
bus_call (GstBus * bus, GstMessage * msg, gpointer data)
{
  GMainLoop *loop = (GMainLoop *) data;
  switch (GST_MESSAGE_TYPE (msg)) {
    case GST_MESSAGE_EOS:
      g_print ("End of stream\n");
      g_main_loop_quit (loop);
      break;
    case GST_MESSAGE_ERROR:{
      gchar *debug;
      GError *error;
      gst_message_parse_error (msg, &error, &debug);
      g_printerr ("ERROR from element %s: %s\n",
          GST_OBJECT_NAME (msg->src), error->message);
      if (debug)
        g_printerr ("Error details: %s\n", debug);
      g_free (debug);
      g_error_free (error);
      g_main_loop_quit (loop);
      break;
    }
    default:
      break;
  }
  return TRUE;
}

int
main (int argc, char *argv[])
{
  GMainLoop *loop = NULL;
  GstElement *pipeline = NULL, *source = NULL, *h264parser = NULL,
      *decoder = NULL, *streammux = NULL, *sink = NULL, *pgie = NULL, *nvvidconv = NULL,
      *nvosd = NULL;
#ifdef PLATFORM_TEGRA
  GstElement *transform = NULL;
#endif
  GstBus *bus = NULL;
  guint bus_watch_id;
  GstPad *osd_sink_pad = NULL;

  /* Check input arguments */
  if (argc != 2) {
    g_printerr ("Usage: %s <H264 filename>\n", argv[0]);
    return -1;
  }

  /* Standard GStreamer initialization */
  gst_init (&argc, &argv);
  loop = g_main_loop_new (NULL, FALSE);

  /* Create gstreamer elements */
  /* Create Pipeline element that will form a connection of other elements */
  pipeline = gst_pipeline_new ("dstest1-pipeline");

  /* Source element for reading from the file */
  source = gst_element_factory_make ("filesrc", "file-source");

  /* Since the data format in the input file is elementary h264 stream,
   * we need a h264parser */
  h264parser = gst_element_factory_make ("h264parse", "h264-parser");

  /* Use nvdec_h264 for hardware accelerated decode on GPU */
  decoder = gst_element_factory_make ("nvv4l2decoder", "nvv4l2-decoder");

  /* Create nvstreammux instance to form batches from one or more sources. */
  streammux = gst_element_factory_make ("nvstreammux", "stream-muxer");

  if (!pipeline || !streammux) {
    g_printerr ("One element could not be created. Exiting.\n");
    return -1;
  }

  /* Use nvinfer to run inferencing on decoder's output,
   * behaviour of inferencing is set through config file */
  pgie = gst_element_factory_make ("nvinfer", "primary-nvinference-engine");

  /* Use convertor to convert from NV12 to RGBA as required by nvosd */
  nvvidconv = gst_element_factory_make ("nvvideoconvert", "nvvideo-converter");

  /* Create OSD to draw on the converted RGBA buffer */
  nvosd = gst_element_factory_make ("nvdsosd", "nv-onscreendisplay");

  /* Finally render the osd output */
#ifdef PLATFORM_TEGRA
  transform = gst_element_factory_make ("nvegltransform", "nvegl-transform");
#endif
  sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");

  if (!source || !h264parser || !decoder || !pgie
      || !nvvidconv || !nvosd || !sink) {
    g_printerr ("One element could not be created. Exiting.\n");
    return -1;
  }

#ifdef PLATFORM_TEGRA
  if(!transform) {
    g_printerr ("One tegra element could not be created. Exiting.\n");
    return -1;
  }
#endif

  /* we set the input filename to the source element */
  g_object_set (G_OBJECT (source), "location", argv[1], NULL);

  g_object_set (G_OBJECT (streammux), "width", MUXER_OUTPUT_WIDTH, "height",
      MUXER_OUTPUT_HEIGHT, "batch-size", 1,
      "batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC, NULL);

  /* Set all the necessary properties of the nvinfer element,
   * the necessary ones are : */
  g_object_set (G_OBJECT (pgie),
      "config-file-path", "dstest1_pgie_config.txt", NULL);

  /* we add a message handler */
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  bus_watch_id = gst_bus_add_watch (bus, bus_call, loop);
  gst_object_unref (bus);

  /* Set up the pipeline */
  /* we add all elements into the pipeline */
#ifdef PLATFORM_TEGRA
  gst_bin_add_many (GST_BIN (pipeline),
      source, h264parser, decoder, streammux, pgie,
      nvvidconv, nvosd, transform, sink, NULL);
#else
  gst_bin_add_many (GST_BIN (pipeline),
      source, h264parser, decoder, streammux, pgie,
      nvvidconv, nvosd, sink, NULL);
#endif

  GstPad *sinkpad, *srcpad;
  gchar pad_name_sink[16] = "sink_0";
  gchar pad_name_src[16] = "src";

  sinkpad = gst_element_get_request_pad (streammux, pad_name_sink);
  if (!sinkpad) {
    g_printerr ("Streammux request sink pad failed. Exiting.\n");
    return -1;
  }

  srcpad = gst_element_get_static_pad (decoder, pad_name_src);
  if (!srcpad) {
    g_printerr ("Decoder request src pad failed. Exiting.\n");
    return -1;
  }

  if (gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK) {
      g_printerr ("Failed to link decoder to stream muxer. Exiting.\n");
      return -1;
  }

  gst_object_unref (sinkpad);
  gst_object_unref (srcpad);

  /* we link the elements together */
  /* file-source -> h264-parser -> nvh264-decoder ->
   * nvinfer -> nvvidconv -> nvosd -> video-renderer */

  if (!gst_element_link_many (source, h264parser, decoder, NULL)) {
    g_printerr ("Elements could not be linked: 1. Exiting.\n");
    return -1;
  }

#ifdef PLATFORM_TEGRA
  if (!gst_element_link_many (streammux, pgie,
      nvvidconv, nvosd, transform, sink, NULL)) {
    g_printerr ("Elements could not be linked: 2. Exiting.\n");
    return -1;
  }
#else
  if (!gst_element_link_many (streammux, pgie,
      nvvidconv, nvosd, sink, NULL)) {
    g_printerr ("Elements could not be linked: 2. Exiting.\n");
    return -1;
  }
#endif

  /* Lets add probe to get informed of the meta data generated, we add probe to
   * the sink pad of the osd element, since by that time, the buffer would have
   * had got all the metadata. */
  osd_sink_pad = gst_element_get_static_pad (nvosd, "sink");
  if (!osd_sink_pad)
    g_print ("Unable to get sink pad\n");
  else
    gst_pad_add_probe (osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
        osd_sink_pad_buffer_probe, NULL, NULL);

  /* Set the pipeline to "playing" state */
  g_print ("Now playing: %s\n", argv[1]);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Wait till pipeline encounters an error or EOS */
  g_print ("Running...\n");
  g_main_loop_run (loop);

  /* Out of the main loop, clean up nicely */
  g_print ("Returned, stopping playback\n");
  gst_element_set_state (pipeline, GST_STATE_NULL);
  g_print ("Deleting pipeline\n");
  gst_object_unref (GST_OBJECT (pipeline));
  g_source_remove (bus_watch_id);
  g_main_loop_unref (loop);
  return 0;
}

Makefile

################################################################################
# Copyright (c) 2018-2019, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

APP:= deepstream-test1-app

TARGET_DEVICE = $(shell gcc -dumpmachine | cut -f1 -d -)

NVDS_VERSION:=4.0

LIB_INSTALL_DIR?=/opt/nvidia/deepstream/deepstream-$(NVDS_VERSION)/lib/

ifeq ($(TARGET_DEVICE),aarch64)
  CFLAGS:= -DPLATFORM_TEGRA
endif

SRCS:= $(wildcard *.c)

INCS:= $(wildcard *.h)

PKGS:= gstreamer-1.0

OBJS:= $(SRCS:.c=.o)

CFLAGS+= -I./includes

CFLAGS+= `pkg-config --cflags $(PKGS)`

LIBS:= `pkg-config --libs $(PKGS)`

LIBS+= -L$(LIB_INSTALL_DIR) -lnvdsgst_meta -lnvds_meta \
       -Wl,-rpath,$(LIB_INSTALL_DIR)

all: $(APP)

%.o: %.c $(INCS) Makefile
	$(CC) -c -o $@ $(CFLAGS) $<

$(APP): $(OBJS) Makefile
	$(CC) -o $(APP) $(OBJS) $(LIBS)

clean:
	rm -rf $(OBJS) $(APP)

dstest1_pgie_config.txt

################################################################################
# Copyright (c) 2018-2019, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

# Following properties are mandatory when engine files are not specified:
#   int8-calib-file(Only in INT8)
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
#   ONNX: onnx-file
#
# Mandatory properties for detectors:
#   num-detected-classes
#
# Optional properties for detectors:
#   enable-dbscan(Default=false), interval(Primary mode only, Default=0)
#   custom-lib-path,
#   parse-bbox-func-name
#
# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
#   operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
#   input-object-min-width, input-object-min-height, input-object-max-width,
#   input-object-max-height
#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, gie-mode (Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#model-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_int8.engine
model-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel
proto-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.prototxt
labelfile-path=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/labels.txt
int8-calib-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/cal_trt.bin
batch-size=1
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid

[class-attrs-all]
threshold=0.2
eps=0.2
group-threshold=1

V. Overview of the Other Sample Source Code

See the "Reference Application Source Details" section of the official documentation: https://docs.nvidia.com/metropolis/deepstream/dev-guide/
You can also refer to the (Chinese-language) blog post 《關於NVIDIA Deepstream SDK壓箱底的資料都在這裏了》: https://cloud.tencent.com/developer/article/1524712

The sample apps include
(Note: all of them must be built with make; see the README in each directory for details)

  • DeepStream Sample App
    <DS installation dir>/sources/apps/sample_apps/deepstream-app
    Description: end-to-end example demonstrating multi-camera streams through a cascade of four neural networks (one primary detector and three secondary classifiers), with tiled display output.
  • DeepStream Test 1
    <DS installation dir>/sources/apps/sample_apps/deepstream-test1
    Description: applies the DeepStream plugins (elements) filesrc, decode, nvstreammux, nvinfer (primary detection network), nvosd, and renderer to a single H264 video stream.
  • DeepStream Test 2
    <DS installation dir>/sources/apps/sample_apps/deepstream-test2
    Description: a simple application, built on test1, that adds extra attributes such as tracking and secondary classification.
  • DeepStream Test 3
    <DS installation dir>/sources/apps/sample_apps/deepstream-test3
    Description: a simple application, built on test1, that demonstrates multiple input sources and batching with nvstreammux.
  • DeepStream Test 4
    <DS installation dir>/sources/apps/sample_apps/deepstream-test4
    Description: built on the Test1 sample, this demonstrates the "nvmsgconv" and "nvmsgbroker" plugins in an IoT-connected pipeline. For test4, you must modify the Kafka broker connection string to connect successfully, and the analytics server docker must be installed before running it. The DeepStream analytics documentation has more information on setting up the analytics server.
  • FasterRCNN Object Detector
    <DS installation dir>/sources/objectDetector_FasterRCNN
    Description: FasterRCNN object detector example.
  • SSD Object Detector
    <DS installation dir>/sources/objectDetector_SSD
    Description: SSD object detector example.

VI. Adapting the DeepStream-Test1 App

Since this sample cannot use a camera directly, and several attempts to make it do so failed, I studied GStreamer on my own (see "GStreamer Application Development Manual study notes") and ported the inference part of the sample into my own code.
This both deepened my understanding of GStreamer's processing logic and clarified some details of the DeepStream plugins.
(Note: this demo uses GStreamer to pull an RTSP network camera stream and runs inference through the DeepStream plugins for real-time object detection. The internal model is unchanged, so the detection behaves exactly like the official DeepStream-Test1 app.)
PS:

Q: Are the following DeepStream plugins open source? Where can I find the plugin source code and related materials?
"nvv4l2decoder", "nvstreammux", "nvinfer", "nvvideoconvert", "nvdsosd", "nvegltransform", "nveglglessink"
A: Look for the code under sources. Only a small part is open source; the rest is not.
Plugins are like building blocks: you assemble them according to your own application flow, and you can also write your own plugins.
But if you want to see how the low-level code is written, you probably cannot.
DeepStream exists to help you build your application quickly.

deepstream_test1_app_demo_rtsp.c

//
// Created by toson on 20-2-10.
//

#include "stdio.h"
#include "gst/gst.h"

#define RTSPCAM "rtsp://admin:[email protected]:554/cam/realmonitor?channel=1&subtype=0"
#define MUXER_OUTPUT_WIDTH 1920
#define MUXER_OUTPUT_HEIGHT 1080
#define MUXER_BATCH_TIMEOUT_USEC 4000000


static void cb_new_rtspsrc_pad(GstElement *element, GstPad *pad, gpointer data) {
    gchar *name;
    GstCaps *p_caps;
    gchar *description;
    GstElement *p_rtph264depay;

    name = gst_pad_get_name(pad);
    g_print("A new pad %s was created\n", name);

    // a new pad has appeared on rtspsrc; link it to the sink pad of rtph264depay
    p_caps = gst_pad_get_pad_template_caps(pad);

    /* print the caps as a string; passing the GstCaps* itself to printf("%s")
     * is undefined behavior, so only the string form is printed */
    description = gst_caps_to_string(p_caps);
    printf("%s\n", description);
    g_free(description);
    gst_caps_unref(p_caps);

    p_rtph264depay = GST_ELEMENT(data);

    // try to link the pads then ...
    if (!gst_element_link_pads(element, name, p_rtph264depay, "sink")) {
        printf("Failed to link elements 3\n");
    }

    g_free(name);
}


int main(int argc, char *argv[]) {
    GstElement *pipeline = NULL, *source = NULL, *rtppay = NULL, *parse = NULL,
            *decoder = NULL, *sink = NULL;

    gst_init(&argc, &argv);

    /// Build Pipeline
    pipeline = gst_pipeline_new("Toson");

    /// Create elements
    source = gst_element_factory_make("rtspsrc", "source");
    g_object_set(G_OBJECT (source), "latency", 2000, NULL);
    rtppay = gst_element_factory_make("rtph264depay", "depayl");
    parse = gst_element_factory_make("h264parse", "parse");
#ifdef PLATFORM_TEGRA
    decoder = gst_element_factory_make("nvv4l2decoder", "nvv4l2-decoder");
    GstElement *streammux = gst_element_factory_make("nvstreammux", "stream-muxer");
    GstElement *pgie = gst_element_factory_make("nvinfer", "primary-nvinference-engine");
    GstElement *nvvidconv = gst_element_factory_make("nvvideoconvert", "nvvideo-converter");
    GstElement *nvosd = gst_element_factory_make("nvdsosd", "nv-onscreendisplay");
    GstElement *transform = gst_element_factory_make("nvegltransform", "nvegl-transform");
    sink = gst_element_factory_make ( "nveglglessink", "sink");
    if (!pipeline || !streammux || !pgie || !nvvidconv || !nvosd || !transform) {
        g_printerr("One element could not be created. Exiting.\n");
        return -1;
    }
    g_object_set(G_OBJECT (streammux), "width", MUXER_OUTPUT_WIDTH, "height",
                 MUXER_OUTPUT_HEIGHT, "batch-size", 1,
                 "batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC, NULL);
    g_object_set(G_OBJECT (pgie),
                 "config-file-path", "dstest1_pgie_config.txt", NULL);
#else
    decoder = gst_element_factory_make("avdec_h264", "decode");
    sink = gst_element_factory_make("autovideosink", "sink");
#endif
    if (!pipeline || !source || !rtppay || !parse || !decoder || !sink) {
        g_printerr("One element could not be created. Exiting.\n");
        return -1;
    }
    g_object_set(G_OBJECT (sink), "sync", FALSE, NULL);
    g_object_set(GST_OBJECT(source), "location", RTSPCAM, NULL);

    /// Add the elements to the pipeline
#ifdef PLATFORM_TEGRA
    gst_bin_add_many(GST_BIN (pipeline),
                     source, rtppay, parse, decoder, streammux, pgie,
                     nvvidconv, nvosd, transform, sink, NULL);
#else
    gst_bin_add_many(GST_BIN (pipeline),
                     source, rtppay, parse, decoder, sink, NULL);
#endif
    // listen for newly created pads
    g_signal_connect(source, "pad-added", G_CALLBACK(cb_new_rtspsrc_pad), rtppay);

#ifdef PLATFORM_TEGRA
    GstPad *sinkpad, *srcpad;
    gchar pad_name_sink[16] = "sink_0";
    gchar pad_name_src[16] = "src";

    sinkpad = gst_element_get_request_pad(streammux, pad_name_sink);
    if (!sinkpad) {
        g_printerr("Streammux request sink pad failed. Exiting.\n");
        return -1;
    }
    /* get the decoder's static src pad so it can be linked to the streammux request pad */
    srcpad = gst_element_get_static_pad(decoder, pad_name_src);
    if (!srcpad) {
        g_printerr("Decoder request src pad failed. Exiting.\n");
        return -1;
    }

    if (gst_pad_link(srcpad, sinkpad) != GST_PAD_LINK_OK) {
        g_printerr("Failed to link decoder to stream muxer. Exiting.\n");
        return -1;
    }
    gst_object_unref(sinkpad);
    gst_object_unref(srcpad);
#endif

    /// Link the elements
#ifdef PLATFORM_TEGRA
    if (!gst_element_link_many(rtppay, parse, decoder, NULL)) {
        printf("\nFailed to link elements 0.\n");
        return -1;
    }
    if (!gst_element_link_many(streammux, pgie, nvvidconv, nvosd, transform, sink, NULL)) {
        printf("\nFailed to link elements 2.\n");
        return -1;
    }
#else
    if (!gst_element_link_many(rtppay, parse, decoder, sink, NULL)) {
        printf("\nFailed to link elements.\n");
        return -1;
    }
#endif

    /// Start the pipeline
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                                                 (GstMessageType) (GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

    if (msg != NULL) {
        gst_message_unref(msg);
    }
    gst_object_unref(bus);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);

    return 0;
}

At runtime you still need the configuration file: dstest1_pgie_config.txt
If you want to build with CMake:

CMakeLists.txt

cmake_minimum_required(VERSION 3.5)
project(deepstream_test1-app_toson)

set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED TRUE)
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -pthread -g")


MESSAGE(STATUS "operation system is ${CMAKE_SYSTEM}")
MESSAGE(STATUS "CMAKE_SYSTEM_NAME is ${CMAKE_SYSTEM}")
IF (CMAKE_SYSTEM_NAME MATCHES "Linux")
    MESSAGE(STATUS "current platform: Linux ")
ELSEIF (CMAKE_SYSTEM_NAME MATCHES "Windows")
    MESSAGE(STATUS "current platform: Windows")
ELSEIF (CMAKE_SYSTEM_NAME MATCHES "FreeBSD")
    MESSAGE(STATUS "current platform: FreeBSD")
ELSE ()
    MESSAGE(STATUS "other platform: ${CMAKE_SYSTEM_NAME}")
ENDIF (CMAKE_SYSTEM_NAME MATCHES "Linux")


if (${CMAKE_SYSTEM} MATCHES "Linux-4.9.140-tegra")
    message("On TEGRA PLATFORM.")
    add_definitions(-DPLATFORM_TEGRA)
    set(SYS_USR_LIB /usr/lib/aarch64-linux-gnu)
    set(SYS_LIB /lib/aarch64-linux-gnu)
    set(DS_LIB /opt/nvidia/deepstream/deepstream-4.0/lib)
    link_libraries(
            /opt/nvidia/deepstream/deepstream-4.0/lib/libnvdsgst_meta.so
            /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_meta.so
    )
else ()
    message("On X86 PLATFORM.")
    set(SYS_USR_LIB /usr/lib/x86_64-linux-gnu)
    set(SYS_LIB /lib/x86_64-linux-gnu)
endif ()

include_directories(
        includes
        /usr/include/gstreamer-1.0
        /usr/include/glib-2.0
        ${SYS_USR_LIB}/glib-2.0/include
)
link_libraries(
        ${SYS_USR_LIB}/libgtk3-nocsd.so.0
        ${SYS_USR_LIB}/libgstreamer-1.0.so.0
        ${SYS_USR_LIB}/libgobject-2.0.so.0
        ${SYS_USR_LIB}/libglib-2.0.so.0
        ${SYS_LIB}/libc.so.6
        ${SYS_LIB}/libdl.so.2
        ${SYS_LIB}/libpthread.so.0
        ${SYS_USR_LIB}/libgmodule-2.0.so.0
        ${SYS_LIB}/libm.so.6
        ${SYS_USR_LIB}/libffi.so.6
        ${SYS_LIB}/libpcre.so.3
)

add_executable(deepstream_test1_app_ deepstream_test1_app.c)
add_executable(deepstream_test1_app_demo_rtsp_ deepstream_test1_app_demo_rtsp.c)
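
A typical out-of-source build and run then looks like this (assuming cmake is installed and the DeepStream 4.0 libraries are at the paths referenced above):

$ mkdir build && cd build
$ cmake ..
$ make
$ ./deepstream_test1_app_demo_rtsp_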