1. Installing and configuring Kafka
2. Writing the .sh file
Starting the Kafka service requires at least three terminals, so I wrote a .sh script to batch this, opening each process in a separate tab of a single terminal window. See Section 3 of my DeepStream development notes.
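A minimal sketch of such a launcher script, assuming a desktop with gnome-terminal and a Kafka install under /opt/kafka (the install path and the topic name `test` are assumptions; adjust them to your setup):

```shell
#!/bin/bash
# start_kafka.sh -- open the three Kafka-related processes as tabs of one window.
KAFKA_HOME=${KAFKA_HOME:-/opt/kafka}   # assumed install location

# The three long-running processes needed before running the DeepStream tests.
cmds=(
  "$KAFKA_HOME/bin/zookeeper-server-start.sh $KAFKA_HOME/config/zookeeper.properties"
  "$KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties"
  "$KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test"
)

if command -v gnome-terminal >/dev/null 2>&1; then
  # One window, one tab per command; keep the shell alive after the command exits.
  for c in "${cmds[@]}"; do
    gnome-terminal --tab -- bash -c "$c; exec bash"
  done
else
  echo "gnome-terminal not found; run the ${#cmds[@]} commands in separate terminals." >&2
fi
```

The console consumer in the third tab is just for watching what DeepStream publishes; it is not required by the pipeline itself.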
3. Modifying the transmitted data
This post is a follow-up to the previous article of DeepStream development notes, focusing on how to modify the data that gets sent.
First, following the DeepStream configuration-file reference, modify msg-conv-payload-type and msg-conv-msg2p-lib. (Admittedly I got lazy here; the English terms felt more direct and accurate, so I stopped translating them from this point on.)
msg-conv-payload-type=0 sends the fullest information: one message per object. msg-conv-payload-type=1 sends a minimal payload: one message per frame, covering all of its objects. msg-conv-payload-type=256 is a custom payload, for which you must implement the nvds_msg2p_* interface yourself; see the official forum thread "how to customize the json message sent to kafka server?".
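As a sketch, the relevant lines in a test-app configuration look like this (the sink-group name and the library path vary by app and SDK version, so treat both as placeholders):

```ini
[sink1]
# 0 = full payload (one message per object)
# 1 = minimal payload (one message per frame)
# 256 = custom payload via your own nvds_msg2p_* implementation
msg-conv-payload-type=1
msg-conv-msg2p-lib=/path/to/libnvds_msgconv.so
```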
The minimal payload looks roughly like the format below (to see the other formats, run test4 and test5 yourself; I won't paste screenshots here):
{
  "version" : "4.0",
  "id" : 33,
  "#timestamp" : "2020-04-09T21:20:48.851Z",
  "sensorId" : "sensor1",
  "objects" : [
    "68|429|472|465|580|person",
    "12|462|463|516|598|person",
    "13|258|466|414|832|person",
    "69|57|457|195|785|person"
  ]
}
I modified the minimal-payload function in nvmsgconv.cpp under /home/username/Downloads/deepstream_sdk_v4.0.2_x86_64/sources/libs/nvmsgconv. The source is shown below:
static gchar *
generate_deepstream_message_minimal(NvDsMsg2pCtx *ctx, NvDsEvent *events, guint size)
{
  /*
  The JSON structure of the frame
  {
    "version": "4.0",
    "id": "frame-id",
    "@timestamp": "2018-04-11T04:59:59.828Z",
    "sensorId": "sensor-id",
    "objects": [
      ".......object-1 attributes...........",
      ".......object-2 attributes...........",
      ".......object-3 attributes..........."
    ]
  }
  */
  /*
  An example object with Vehicle object-type
  {
    "version": "4.0",
    "id": "frame-id",
    "@timestamp": "2018-04-11T04:59:59.828Z",
    "sensorId": "sensor-id",
    "objects": [
      "957|1834|150|1918|215|Vehicle|#|sedan|Bugatti|M|blue|CA 444|California|0.8",
      "..........."
    ]
  }
  */
  JsonNode *rootNode;
  JsonObject *jobject;
  JsonArray *jArray;
  guint i;
  stringstream ss;
  gchar *message = NULL;

  jArray = json_array_new();

  for (i = 0; i < size; i++)
  {
    ss.str("");
    ss.clear();

    NvDsEventMsgMeta *meta = events[i].metadata;
    ss << meta->trackingId << "|" << meta->bbox.left << "|" << meta->bbox.top
       << "|" << meta->bbox.left + meta->bbox.width << "|" << meta->bbox.top + meta->bbox.height
       << "|" << object_enum_to_str(meta->objType, meta->objectId);

    if (meta->extMsg && meta->extMsgSize)
    {
      // Attach secondary inference attributes.
      switch (meta->objType)
      {
        case NVDS_OBJECT_TYPE_VEHICLE:
        {
          NvDsVehicleObject *dsObj = (NvDsVehicleObject *)meta->extMsg;
          if (dsObj)
          {
            ss << "|#|" << to_str(dsObj->type) << "|" << to_str(dsObj->make) << "|"
               << to_str(dsObj->model) << "|" << to_str(dsObj->color) << "|" << to_str(dsObj->license)
               << "|" << to_str(dsObj->region) << "|" << meta->confidence;
          }
        }
        break;
        case NVDS_OBJECT_TYPE_PERSON:
        {
          NvDsPersonObject *dsObj = (NvDsPersonObject *)meta->extMsg;
          if (dsObj)
          {
            ss << "|#|" << to_str(dsObj->gender) << "|" << dsObj->age << "|"
               << to_str(dsObj->hair) << "|" << to_str(dsObj->cap) << "|" << to_str(dsObj->apparel)
               << "|" << meta->confidence;
          }
        }
        break;
        case NVDS_OBJECT_TYPE_FACE:
        {
          NvDsFaceObject *dsObj = (NvDsFaceObject *)meta->extMsg;
          if (dsObj)
          {
            ss << "|#|" << to_str(dsObj->gender) << "|" << dsObj->age << "|"
               << to_str(dsObj->hair) << "|" << to_str(dsObj->cap) << "|" << to_str(dsObj->glasses)
               << "|" << to_str(dsObj->facialhair) << "|" << to_str(dsObj->name) << "|"
               << to_str(dsObj->eyecolor) << "|" << meta->confidence;
          }
        }
        break;
        default:
          cout << "Object type (" << meta->objType << ") not implemented" << endl;
          break;
      }
    }
    json_array_add_string_element(jArray, ss.str().c_str());
  }

  // It is assumed that all events / objects are associated with same frame.
  // Therefore ts / sensorId / frameId of first object can be used.
  jobject = json_object_new();
  json_object_set_string_member(jobject, "version", "4.0");
  json_object_set_int_member(jobject, "id", events[0].metadata->frameId);
  json_object_set_string_member(jobject, "#timestamp", events[0].metadata->ts);
  if (events[0].metadata->sensorStr)
  {
    json_object_set_string_member(jobject, "sensorId", events[0].metadata->sensorStr);
  }
  else if (ctx->privData)
  {
    json_object_set_string_member(jobject, "sensorId",
        to_str((gchar *)sensor_id_to_str(ctx, events[0].metadata->sensorId)));
  }
  else
  {
    json_object_set_string_member(jobject, "sensorId", "0");
  }
  json_object_set_array_member(jobject, "objects", jArray);

  rootNode = json_node_new(JSON_NODE_OBJECT);
  json_node_set_object(rootNode, jobject);

  message = json_to_string(rootNode, TRUE);
  json_node_free(rootNode);
  json_object_unref(jobject);

  return message;
}
Using functions like json_object_set_array_member() (and the corresponding string and int setters, which need not be arrays) you can rewrite the format of the JSON being sent. The code is easy to follow and easy to modify, so I won't belabor it.
4. Additional notes
Here are some additional notes on understanding the source code.
NvDsMsg2pCtx *ctx holds the configuration. NvDsEvent *events is an array whose elements carry an NvDsEventMsgMeta. guint size is the number of objects in one frame: the main application stores one event meta per object, so a frame with N objects yields N entries.
typedef struct NvDsEventMsgMeta
{
  /** type of event */
  NvDsEventType type;
  /** type of object */
  NvDsObjectType objType;
  /** bounding box of object */
  NvDsRect bbox;
  /** Geo-location of object */
  NvDsGeoLocation location;
  /** coordinate of object */
  NvDsCoordinate coordinate;
  /** signature of object */
  NvDsObjectSignature objSignature;
  /** class id of object */
  gint objClassId;
  /** id of sensor that generated the event */
  gint sensorId;
  /** id of analytics module that generated the event */
  gint moduleId;
  /** id of place related to the object */
  gint placeId;
  /** id of component that generated this event */
  gint componentId;
  /** video frame id of this event */
  gint frameId;
  /** confidence of inference */
  gdouble confidence;
  /** tracking id of object */
  gint trackingId;
  /** time stamp of generated event */
  gchar *ts;
  /** label of detected / inferred object */
  gchar *objectId;
  /** Identity string of sensor */
  gchar *sensorStr;
  /** other attributes associated with the object */
  gchar *otherAttrs;
  /** name of video file */
  gchar *videoPath;
  /**
   * To extend the event message meta data.
   * This can be used for custom values that can't be accommodated
   * in the existing fields OR if object(vehicle, person, face etc.) specific
   * values needs to be attached.
   */
  gpointer extMsg;
  /** size of custom object */
  guint extMsgSize;
} NvDsEventMsgMeta;
One more pitfall: the char * fields must be filled with a strdup() copy. When converting a std::string to char *, you cannot use str.c_str() directly; you need strdup(str.c_str()). Otherwise you will either see garbage characters, or the string will behave like a constant and never change from one frame to the next.
If you have other questions, leave a comment and I will follow up. Overall, the Kafka part is fairly simple.