================ Added 2019-12-12: the approach below, which reads H.264 frames from memory, contains redundant work; an improved approach is described in 《live555-從buffer讀取h264推流》 ================
(Correction, 2019-03-25: everywhere below, "H.264 frame" really means an H.264 slice, not a full picture. Each slice can be decoded independently, which is why you sometimes see partial corruption on screen: one picture may be split across several slices, and if the network loses some of them, only part of the picture decodes.)
We have local, already-framed H.264 data in a buffer and want to feed it into live555 for RTSP streaming.
live555's source ships a demo that reads an H.264 file directly. To take input from your own device instead, the official FAQ is relevant: http://www.live555.com/liveMedia/faq.html — "3. The "test*Streamer" test programs read from a file. Can I modify them so that they take input from a H.264, H.265 or MPEG encoder instead, so I can stream live (rather than prerecorded) video and/or audio?"
It is still worth reading the original answer; one sentence stands out: "(Even simpler, if your operating system represents the encoder device as a file, then you can just use the name of this file (instead of "test.*")"
The final result first (a detailed walkthrough of the call flow will follow):
(Note up front: integrating this way requires few changes, but it costs efficiency. Before reaching the sending module, the data makes an extra pass through the frame parser, and much of that parser's work is locating the start and end of each frame in the byte stream (it also extracts SPS, PPS and similar information). Our local buffer already knows the frame boundaries, so that work is duplicated, and the parser uses extra internal memory of its own. To eliminate this, you would have to implement the H264VideoStreamFramer layer yourself.)
Following testOnDemandRTSPServer.cpp, add your own module (keep the original code intact as far as possible; it is best to put all modifications in a separate directory).
Add your module in main():
#if 1
  // A H.264 live elementary stream (our own ServerMediaSubsession):
  {
    // Output buffer size; must be larger than any single H.264 frame
    OutPacketBuffer::maxSize = 2000000;
    char const* streamName = "h264Live";
    char const* inputFileName = "live";
    ServerMediaSession* sms
      = ServerMediaSession::createNew(*env, streamName, streamName,
                                      descriptionString);
    sms->addSubsession(H264LiveVideoServerMediaSubssion
                       ::createNew(*env, reuseFirstSource));
    rtspServer->addServerMediaSession(sms);
    announceStream(rtspServer, sms, streamName, inputFileName);
  }
#endif
#if 1
  // The original H.264 demo, kept for comparison
  // A H.264 video elementary stream:
  {
    char const* streamName = "h264ESVideoTest";
    char const* inputFileName = "test.264";
    ServerMediaSession* sms
      = ServerMediaSession::createNew(*env, streamName, streamName,
                                      descriptionString);
    sms->addSubsession(H264VideoFileServerMediaSubsession
                       ::createNew(*env, inputFileName, reuseFirstSource));
    rtspServer->addServerMediaSession(sms);
    announceStream(rtspServer, sms, streamName, inputFileName);
  }
#endif
Next, implement your own class H264LiveVideoServerMediaSubssion.
H264LiveVideoServerMediaSubssion.hh
#ifndef _H264_LIVE_VIDEO_SERVER_MEDIA_SUBSESSION_HH
#define _H264_LIVE_VIDEO_SERVER_MEDIA_SUBSESSION_HH

#include "OnDemandServerMediaSubsession.hh"

class H264LiveVideoServerMediaSubssion : public OnDemandServerMediaSubsession
{
public:
  static H264LiveVideoServerMediaSubssion* createNew(UsageEnvironment& env, Boolean reuseFirstSource);

protected:
  H264LiveVideoServerMediaSubssion(UsageEnvironment& env, Boolean reuseFirstSource);
  ~H264LiveVideoServerMediaSubssion();

protected: // redefined virtual functions
  FramedSource* createNewStreamSource(unsigned clientSessionId, unsigned& estBitrate);
  RTPSink* createNewRTPSink(Groupsock* rtpGroupsock,
                            unsigned char rtpPayloadTypeIfDynamic,
                            FramedSource* inputSource);
};
#endif
H264LiveVideoServerMediaSubssion.cpp
#include "H264LiveVideoServerMediaSubssion.hh"
#include "H264FramedLiveSource.hh"
#include "H264VideoStreamFramer.hh"
#include "H264VideoRTPSink.hh"

H264LiveVideoServerMediaSubssion* H264LiveVideoServerMediaSubssion::createNew(UsageEnvironment& env, Boolean reuseFirstSource)
{
  return new H264LiveVideoServerMediaSubssion(env, reuseFirstSource);
}

H264LiveVideoServerMediaSubssion::H264LiveVideoServerMediaSubssion(UsageEnvironment& env, Boolean reuseFirstSource)
  : OnDemandServerMediaSubsession(env, reuseFirstSource)
{
}

H264LiveVideoServerMediaSubssion::~H264LiveVideoServerMediaSubssion()
{
}

FramedSource* H264LiveVideoServerMediaSubssion::createNewStreamSource(unsigned clientSessionId, unsigned& estBitrate)
{
  estBitrate = 500; // kbps, estimate (as H264VideoFileServerMediaSubsession does)

  // Create the video source, following H264VideoFileServerMediaSubsession
  H264FramedLiveSource* liveSource = H264FramedLiveSource::createNew(envir());
  if (liveSource == NULL)
  {
    return NULL;
  }

  // Create a framer for the Video Elementary Stream:
  return H264VideoStreamFramer::createNew(envir(), liveSource);
}

RTPSink* H264LiveVideoServerMediaSubssion
::createNewRTPSink(Groupsock* rtpGroupsock,
                   unsigned char rtpPayloadTypeIfDynamic,
                   FramedSource* /*inputSource*/) {
  return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
}
Now for the key part: create H264FramedLiveSource, our own input source. Again, the analysis is done by comparing against the ByteStreamFileSource used inside H264VideoFileServerMediaSubsession.
H264FramedLiveSource.hh
class H264FramedLiveSource : public FramedSource
{
public:
  static H264FramedLiveSource* createNew(UsageEnvironment& env);

  // redefined virtual functions
  virtual unsigned maxFrameSize() const;

protected:
  H264FramedLiveSource(UsageEnvironment& env);
  virtual ~H264FramedLiveSource();

private:
  virtual void doGetNextFrame();

protected:
  static TestFromFile* pTest;
};
H264FramedLiveSource.cpp
H264FramedLiveSource::H264FramedLiveSource(UsageEnvironment& env)
  : FramedSource(env)
{
}

H264FramedLiveSource* H264FramedLiveSource::createNew(UsageEnvironment& env)
{
  return new H264FramedLiveSource(env);
}

H264FramedLiveSource::~H264FramedLiveSource()
{
}

unsigned H264FramedLiveSource::maxFrameSize() const
{
  // Return the maximum length of one local H.264 frame
  return 1024 * 120;
}
void H264FramedLiveSource::doGetNextFrame()
{
  // Read the local frame data: essentially a memcpy(fTo, XX, fMaxSize).
  // To avoid losing data, fMaxSize must be >= the size of the local frame;
  // that is what the maxFrameSize() override above is for.
  fFrameSize = XXXgeth264Frame(fTo, fMaxSize);
  printf("read data %d %s fMaxSize %u, fFrameSize %u\n", __LINE__, __FUNCTION__, fMaxSize, fFrameSize);
  if (fFrameSize == 0) {
    handleClosure();
    return;
  }

  // Set the presentation timestamp
  gettimeofday(&fPresentationTime, NULL);

  // Inform the reader that he has data:
  // To avoid possible infinite recursion, we need to return to the event loop to do this:
  nextTask() = envir().taskScheduler().scheduleDelayedTask(0,
      (TaskFunc*)FramedSource::afterGetting, this);
}
The key is the implementation of the doGetNextFrame virtual function (calling it an "interface", in the Java sense, feels more apt). Its responsibilities, derived from the ByteStreamFileSource implementation, are:
1. Read the data into fTo.
2. Record the number of bytes actually read in fFrameSize.
3. Set the timestamp fPresentationTime.
4. After the data is read, inform the caller ("Inform the reader that he has data").
One open question remains: fMaxSize, the input parameter that says how much data may be read, is not under your control at this point. You have to look at how the caller sets it, which means understanding the calling logic; that analysis will be added later. To influence it, implement H264FramedLiveSource::maxFrameSize() as above and return the maximum length of one frame. ByteStreamFileSource does not override this virtual function: it wraps a file stream, so the caller can read as much as it wants and resume from the previous position on the next read. Recall the official FAQ: "if your operating system represents the encoder device as a file". My case is different: I have one frame sitting in a buffer and need the caller to take it in a single read, because by the next call it may be gone. That is the whole feature being implemented here.
Addendum to the efficiency note at the top: skipping the frame-parser layer is already supported in the live555 source; see https://blog.csdn.net/u012459903/article/details/103507877
Diagram: see figure 4 in https://blog.csdn.net/u012459903/article/details/86597475