FFmpeg 3.3.2 + SDL2: Streaming Audio Playback

 

 

My video course (basics): (NDK) Building a Universal Android Audio Player with FFmpeg

My video course (advanced): (NDK) Building an Android Video Player with FFmpeg

My video course (encoding and live streaming): Android Video Encoding and Live Streaming

My video course (C++ OpenGL): Android C++ OpenGL Tutorial

 

 

Preface:

 

I enjoy listening to the radio, especially "Qianli Gong Liangxiao" on China National Radio's Voice of China. The show is full of chicken soup for the soul, but it genuinely reaches the hearts of us night owls and strikes a chord; give it a listen some time. It feels even better on a player you built yourself, though, so here is my build process, offered in the hope of drawing out better ideas.

 

Main text:

If this article feels a bit abrupt, read these three posts first and this one will be much easier to follow:

1. Porting FFmpeg (3.3.2) to the Android platform

2. Creating an FFmpeg project in Android Studio with CMake

3. Building SDL2 with CMake in Android Studio

 

Enough talk; screenshots first:

This project also uses a few of my other open-source projects:

1. RxjavaRetrofit (networking)

2. AdViewPager (banner carousel)

3. RecyclerViewHeaderAndFooter

Figure 1: Radio station list

Figure 2: Playback page

Figure 3: Background playback with a status-bar entry

Not bad, right? (The UI imitates Qingting FM, and the data is scraped from China Radio, CRN.) This post is not about building that app, though; the pretty UI is only the surface. What we care about is the inside: the player wrapper. The UI below is the one we actually built and debugged against:

 

This post is based on Eclipse. If you have read my other posts on porting FFmpeg and SDL2 to Android Studio, moving it over to AS will be no problem at all. Let's get started.

I. First, the overall flow and the knowledge points involved (some good learning links are given at the end of the post)

1. The FFmpeg workflow

1) Register the codecs and initialize networking

        av_register_all();

        avformat_network_init();

2) Declare the format (demuxer) context

        AVFormatContext

3) Open the input

        avformat_open_input

4) Probe the streams (audio, video, subtitles) in the file

        avformat_find_stream_info

5) Loop over the streams and pick out the one we want (there may be several audio streams, plus video and subtitle streams); pseudocode:

 

for (; i < pFormatCtx->nb_streams; i++)
	{
		if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
		{
			playerState->audioStreamIndex = i;
			break;
		}
	}

6) Get the codec context of the stream we want to decode; pseudocode:

 

 

pFormatCtx->streams[playerState->audioStreamIndex]->codec;

7) Look up the decoder for that codec context

 

        avcodec_find_decoder


8) Open the decoder

        avcodec_open2

9) Loop, reading packets from the stream into an AVPacket; decode each one, apply any resampling or effects, and play or render it. Steps 1 to 9 are pulled together in the sketch right after this list.

      for(av_read_frame){...}
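Put together, steps 1 to 9 look roughly like this. It is a minimal sketch against the FFmpeg 3.x API used throughout this post: url is the input address, error handling is trimmed, and decode_and_play is a hypothetical stand-in for the decoding and output work shown later in player.c.

av_register_all();                                                  // step 1
avformat_network_init();

AVFormatContext *fmt = avformat_alloc_context();                    // step 2
if (avformat_open_input(&fmt, url, NULL, NULL) != 0)                // step 3
	return -1;
if (avformat_find_stream_info(fmt, NULL) < 0)                       // step 4
	return -1;

int audioIndex = -1;                                                // step 5
int i = 0;
for (; i < fmt->nb_streams; i++)
{
	if (fmt->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
	{
		audioIndex = i;
		break;
	}
}
if (audioIndex == -1)
	return -1;

AVCodecContext *cctx = fmt->streams[audioIndex]->codec;             // step 6
AVCodec *codec = avcodec_find_decoder(cctx->codec_id);              // step 7
if (!codec || avcodec_open2(cctx, codec, NULL) != 0)                // step 8
	return -1;

AVPacket pkt;                                                       // step 9
while (av_read_frame(fmt, &pkt) >= 0)
{
	if (pkt.stream_index == audioIndex)
		decode_and_play(cctx, &pkt); // hypothetical: decode + resample + output
	av_packet_unref(&pkt);
}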

 

2. The SDL workflow

1) Initialize the subsystems you need (SDL_SetMainReady is already taken care of by the main wrapper in SDL_android_main.c)

        SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)

2) Set the sample rate, format, and callback:

 

	playerState->wanted_spec.freq = audioCodecCtx->sample_rate;
	playerState->wanted_spec.format = AUDIO_S16SYS;
	playerState->wanted_spec.channels = 2;
	playerState->wanted_spec.silence = 0;
	playerState->wanted_spec.samples = SDL_AUDIO_BUFFER_SIZE;
	playerState->wanted_spec.callback = audio_callback;
	playerState->wanted_spec.userdata = playerState->audioCodecCtx;

 

3) Open the audio device and un-pause it (a minimal callback sketch follows this list):

        SDL_OpenAudio

        SDL_PauseAudio(0) // 0 = play, anything else = pause
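SDL audio is pull-based: once un-paused, SDL calls your callback from its own audio thread every time the device needs len more bytes, and you must fill all of them. A minimal skeleton of just that contract (silence only; real decoding goes where the memset is, and SDL_AUDIO_BUFFER_SIZE is the 1024-sample value #defined later in this post):

void audio_callback(void *userdata, Uint8 *stream, int len)
{
	// SDL asks for exactly len bytes on its audio thread; always fill them all
	SDL_memset(stream, 0, len);
}

int open_audio(int sample_rate)
{
	SDL_AudioSpec wanted_spec, spec;
	wanted_spec.freq = sample_rate;           // taken from the codec context in real code
	wanted_spec.format = AUDIO_S16SYS;
	wanted_spec.channels = 2;
	wanted_spec.silence = 0;
	wanted_spec.samples = SDL_AUDIO_BUFFER_SIZE;
	wanted_spec.callback = audio_callback;
	wanted_spec.userdata = NULL;
	if (SDL_OpenAudio(&wanted_spec, &spec) < 0)
		return -1;
	SDL_PauseAudio(0);                        // start pulling
	return 0;
}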

3. You also need to be comfortable with C queues, threads, and similar primitives.

In this pipeline FFmpeg demuxes and decodes the audio (or video), hands the decoded data to SDL to play or display, and once audio/video synchronization is added on top you get normal playback. A miniature sketch of that producer/consumer split follows.
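In miniature the split looks like this: one SDL thread produces into a mutex/condition-protected buffer, and the audio callback consumes from it. This is the same pattern as the PacketQueue in player.c below, reduced to plain integers:

typedef struct
{
	int items[64];
	int count;
	SDL_mutex *mutex;
	SDL_cond *cond;
} IntQueue;

IntQueue q;

int producer(void *arg) // stands in for the demux/decode thread
{
	int i = 0;
	for (; i < 100; i++)
	{
		SDL_LockMutex(q.mutex);
		if (q.count < 64)
			q.items[q.count++] = i;
		SDL_CondSignal(q.cond); // wake a waiting consumer
		SDL_UnlockMutex(q.mutex);
	}
	return 0;
}

int consume() // stands in for the audio callback side
{
	SDL_LockMutex(q.mutex);
	while (q.count == 0)
		SDL_CondWait(q.cond, q.mutex); // atomically unlock, sleep, relock
	int v = q.items[--q.count];
	SDL_UnlockMutex(q.mutex);
	return v;
}

// setup: q.mutex = SDL_CreateMutex(); q.cond = SDL_CreateCond();
//        SDL_CreateThread(producer, "producer", NULL);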

II. Rework SDL2's default SDLActivity: extract the relevant methods and native methods into separate files so we no longer have to extend that Activity. This gives us a player control class (WlPlayer.java) and a surface class (SDLSurface.java).

 

 

WlPlayer.java (with my own playback-control methods and callbacks added):

 

 

package com.ywl5320.wlsdk.player;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

import com.ywl5320.wlsdk.player.SDLSurface.OnSurfacePrepard;

import android.app.Activity;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;
import android.os.Build;
import android.util.Log;
import android.view.InputDevice;
import android.view.MotionEvent;
import android.view.Surface;
import android.view.View;

public class WlPlayer {
	
	private static OnPlayerPrepard mOnPlayerPrepard;
	private static OnPlayerInfoListener onPlayerInfoListener;
	private static OnErrorListener onErrorListener;
	private static OnCompleteListener onCompleteListener;
	
	private static String url;

	public static Activity mSingleton;

	// Keep track of the paused state
	public static boolean mIsPaused, mIsSurfaceReady, mHasFocus;
	public static boolean mExitCalledFromJava;

	// This is what SDL runs in. It invokes SDL_main(), eventually
	protected static Thread mSDLThread;

	// Audio
	protected static AudioTrack mAudioTrack;
	protected static AudioRecord mAudioRecord;

	/**
	 * If shared libraries (e.g. SDL or the native application) could not be
	 * loaded.
	 */
	public static boolean mBrokenLibraries;

	// If we want to separate mouse and touch events.
	// This is only toggled in native code when a hint is set!
	public static boolean mSeparateMouseAndTouch;

	public static SDLSurface mSurface;
	protected static SDLJoystickHandler mJoystickHandler;

	public static void initPlayer(Activity activity)
	{
		loadLibraries();
		initialize();
		mSingleton = activity;
		
		if (Build.VERSION.SDK_INT >= 12) {
			mJoystickHandler = new SDLJoystickHandler_API12();
		} else {
			mJoystickHandler = new SDLJoystickHandler();
		}
	}

	/**
	 * This method is called by SDL before loading the native shared libraries.
	 * It can be overridden to provide names of shared libraries to be loaded.
	 * The default implementation returns the defaults. It never returns null.
	 * An array returned by a new implementation must at least contain "SDL2".
	 * Also keep in mind that the order the libraries are loaded may matter.
	 * 
	 * @return names of shared libraries to be loaded (e.g. "SDL2", "main").
	 */
	protected static String[] getLibraries() {
		return new String[] { 
				"SDL2", 
				"avutil-55",
	            "swresample-2",
	            "avcodec-57",
	            "avformat-57", 
	            "swscale-4",
	            "postproc-54",
	            "avfilter-6",
	            "avdevice-57",
				"wlPlayer" };
	}

	// Load the .so
	public static void loadLibraries() {
		for (String lib : getLibraries()) {
			System.loadLibrary(lib);
		}
	}

	public static void initialize() {
		// The static nature of the singleton and Android quirkyness force us to
		// initialize everything here
		// Otherwise, when exiting the app and returning to it, these variables
		// *keep* their pre exit values
		mSingleton = null;
		mSurface = null;
		// mTextEdit = null;
		// mLayout = null;
		mJoystickHandler = null;
		mSDLThread = null;
		mAudioTrack = null;
		mAudioRecord = null;
		mExitCalledFromJava = false;
		mBrokenLibraries = false;
		mIsPaused = false;
		mIsSurfaceReady = false;
		mHasFocus = true;
	}
	
	public static void initSurface(SDLSurface surface)
	{
		mSurface = surface;
	}

	public static void setPrepardListener(OnPlayerPrepard onPlayerPrepard) {
		mOnPlayerPrepard = onPlayerPrepard;
	}
	
	public static void setOnErrorListener(OnErrorListener error)
	{
		onErrorListener = error;
	}
	
	
	public static void setOnCompleteListener(OnCompleteListener onComplete)
	{
		onCompleteListener = onComplete;
	}
	
	public static void setDataSource(String source)
	{
		url = source;
	}

	/**
	 * Called by onPause or surfaceDestroyed. Even if surfaceDestroyed is the
	 * first to be called, mIsSurfaceReady should still be set to 'true' during
	 * the call to onPause (in a usual scenario).
	 */
	public static void handlePause() {
		if (!mIsPaused && mIsSurfaceReady) {
			mIsPaused = true;
			nativePause();
		}
	}

	/**
	 * Called by onResume or surfaceCreated. An actual resume should be done
	 * only when the surface is ready. Note: Some Android variants may send
	 * multiple surfaceChanged events, so we don't need to resume every time we
	 * get one of those events, only if it comes after surfaceDestroyed
	 */
	public static void handleResume() {
		if (WlPlayer.mIsPaused && WlPlayer.mIsSurfaceReady && WlPlayer.mHasFocus) {
			WlPlayer.mIsPaused = false;
			WlPlayer.nativeResume();
		}
	}

	/* The native thread has finished */
	public static void handleNativeExit() {
		WlPlayer.mSDLThread = null;
		mSingleton.finish();
	}

	/**
	 * This method is called by SDL using JNI.
	 */
	public static void pollInputDevices() {
		if (WlPlayer.mSDLThread != null) {
			mJoystickHandler.pollInputDevices();
		}
	}

	// Check if a given device is considered a possible SDL joystick
	public static boolean isDeviceSDLJoystick(int deviceId) {
		InputDevice device = InputDevice.getDevice(deviceId);
		// We cannot use InputDevice.isVirtual before API 16, so let's accept
		// only nonnegative device ids (VIRTUAL_KEYBOARD equals -1)
		if ((device == null) || (deviceId < 0)) {
			return false;
		}
		int sources = device.getSources();
		return (((sources & InputDevice.SOURCE_CLASS_JOYSTICK) == InputDevice.SOURCE_CLASS_JOYSTICK)
				|| ((sources & InputDevice.SOURCE_DPAD) == InputDevice.SOURCE_DPAD)
				|| ((sources & InputDevice.SOURCE_GAMEPAD) == InputDevice.SOURCE_GAMEPAD));
	}

	// Joystick glue code, just a series of stubs that redirect to the
	// SDLJoystickHandler instance
	public static boolean handleJoystickMotionEvent(MotionEvent event) {
		return mJoystickHandler.handleMotionEvent(event);
	}

	/**
	 * This method is called by SDL using JNI.
	 */
	public static Surface getNativeSurface() {
		return WlPlayer.mSurface.getNativeSurface();
	}

	// Audio

	/**
	 * This method is called by SDL using JNI.
	 */
	public static int audioOpen(int sampleRate, boolean is16Bit, boolean isStereo, int desiredFrames) {
		int channelConfig = isStereo ? AudioFormat.CHANNEL_CONFIGURATION_STEREO
				: AudioFormat.CHANNEL_CONFIGURATION_MONO;
		int audioFormat = is16Bit ? AudioFormat.ENCODING_PCM_16BIT : AudioFormat.ENCODING_PCM_8BIT;
		int frameSize = (isStereo ? 2 : 1) * (is16Bit ? 2 : 1);

		Log.v("wlfm", "SDL audio: wanted " + (isStereo ? "stereo" : "mono") + " " + (is16Bit ? "16-bit" : "8-bit") + " "
				+ (sampleRate / 1000f) + "kHz, " + desiredFrames + " frames buffer");

		// Let the user pick a larger buffer if they really want -- but ye
		// gods they probably shouldn't, the minimums are horrifyingly high
		// latency already
		desiredFrames = Math.max(desiredFrames,
				(AudioTrack.getMinBufferSize(sampleRate, channelConfig, audioFormat) + frameSize - 1) / frameSize);

		if (mAudioTrack == null) {
			mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, channelConfig, audioFormat,
					desiredFrames * frameSize, AudioTrack.MODE_STREAM);

			// Instantiating AudioTrack can "succeed" without an exception and
			// the track may still be invalid
			// Ref:
			// https://android.googlesource.com/platform/frameworks/base/+/refs/heads/master/media/java/android/media/AudioTrack.java
			// Ref:
			// http://developer.android.com/reference/android/media/AudioTrack.html#getState()

			if (mAudioTrack.getState() != AudioTrack.STATE_INITIALIZED) {
				Log.e("wlfm", "Failed during initialization of Audio Track");
				mAudioTrack = null;
				return -1;
			}

			mAudioTrack.play();
		}

		Log.v("wlfm",
				"SDL audio: got " + ((mAudioTrack.getChannelCount() >= 2) ? "stereo" : "mono") + " "
						+ ((mAudioTrack.getAudioFormat() == AudioFormat.ENCODING_PCM_16BIT) ? "16-bit" : "8-bit") + " "
						+ (mAudioTrack.getSampleRate() / 1000f) + "kHz, " + desiredFrames + " frames buffer");

		return 0;
	}

	/**
	 * This method is called by SDL using JNI.
	 */
	public static void audioWriteShortBuffer(short[] buffer) {
		for (int i = 0; i < buffer.length;) {
			int result = mAudioTrack.write(buffer, i, buffer.length - i);
			if (result > 0) {
				i += result;
			} else if (result == 0) {
				try {
					Thread.sleep(1);
				} catch (InterruptedException e) {
					// Nom nom
				}
			} else {
				Log.w("wlfm", "SDL audio: error return from write(short)");
				return;
			}
		}
	}

	/**
	 * This method is called by SDL using JNI.
	 */
	public static void audioWriteByteBuffer(byte[] buffer) {
		for (int i = 0; i < buffer.length;) {
			int result = mAudioTrack.write(buffer, i, buffer.length - i);
			if (result > 0) {
				i += result;
			} else if (result == 0) {
				try {
					Thread.sleep(1);
				} catch (InterruptedException e) {
					// Nom nom
				}
			} else {
				Log.w("wlfm", "SDL audio: error return from write(byte)");
				return;
			}
		}
	}

	/** This method is called by SDL using JNI. */
	public static void audioClose() {
		if (mAudioTrack != null) {
			mAudioTrack.stop();
			mAudioTrack.release();
			mAudioTrack = null;
		}
	}

	/** This method is called by SDL using JNI. */
	public static void captureClose() {
		if (mAudioRecord != null) {
			mAudioRecord.stop();
			mAudioRecord.release();
			mAudioRecord = null;
		}
	}
	
	/**
     * This method is called by SDL using JNI.
     */
    public static boolean sendMessage(int command, int param) {
        return false;
    }

	/**
	 * This method is called by SDL using JNI.
	 */
	public static int captureOpen(int sampleRate, boolean is16Bit, boolean isStereo, int desiredFrames) {
		int channelConfig = isStereo ? AudioFormat.CHANNEL_CONFIGURATION_STEREO
				: AudioFormat.CHANNEL_CONFIGURATION_MONO;
		int audioFormat = is16Bit ? AudioFormat.ENCODING_PCM_16BIT : AudioFormat.ENCODING_PCM_8BIT;
		int frameSize = (isStereo ? 2 : 1) * (is16Bit ? 2 : 1);

		Log.v("wlfm", "SDL capture: wanted " + (isStereo ? "stereo" : "mono") + " " + (is16Bit ? "16-bit" : "8-bit")
				+ " " + (sampleRate / 1000f) + "kHz, " + desiredFrames + " frames buffer");

		// Let the user pick a larger buffer if they really want -- but ye
		// gods they probably shouldn't, the minimums are horrifyingly high
		// latency already
		desiredFrames = Math.max(desiredFrames,
				(AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioFormat) + frameSize - 1) / frameSize);

		if (mAudioRecord == null) {
			mAudioRecord = new AudioRecord(MediaRecorder.AudioSource.DEFAULT, sampleRate, channelConfig, audioFormat,
					desiredFrames * frameSize);

			// see notes about AudioTrack state in audioOpen(), above. Probably
			// also applies here.
			if (mAudioRecord.getState() != AudioRecord.STATE_INITIALIZED) {
				Log.e("wlfm", "Failed during initialization of AudioRecord");
				mAudioRecord.release();
				mAudioRecord = null;
				return -1;
			}

			mAudioRecord.startRecording();
		}

		Log.v("wlfm",
				"SDL capture: got " + ((mAudioRecord.getChannelCount() >= 2) ? "stereo" : "mono") + " "
						+ ((mAudioRecord.getAudioFormat() == AudioFormat.ENCODING_PCM_16BIT) ? "16-bit" : "8-bit") + " "
						+ (mAudioRecord.getSampleRate() / 1000f) + "kHz, " + desiredFrames + " frames buffer");

		return 0;
	}
	
	/**
     * This method is called by SDL using JNI.
     * @return an array which may be empty but is never null.
     */
    public static int[] inputGetInputDeviceIds(int sources) {
        int[] ids = InputDevice.getDeviceIds();
        int[] filtered = new int[ids.length];
        int used = 0;
        for (int i = 0; i < ids.length; ++i) {
            InputDevice device = InputDevice.getDevice(ids[i]);
            if ((device != null) && ((device.getSources() & sources) != 0)) {
                filtered[used++] = device.getId();
            }
        }
        return Arrays.copyOf(filtered, used);
    }

	/** This method is called by SDL using JNI. */
	public static int captureReadShortBuffer(short[] buffer, boolean blocking) {
		// !!! FIXME: this is available in API Level 23. Until then, we always
		// block. :(
		// return mAudioRecord.read(buffer, 0, buffer.length, blocking ?
		// AudioRecord.READ_BLOCKING : AudioRecord.READ_NON_BLOCKING);
		return mAudioRecord.read(buffer, 0, buffer.length);
	}

	/** This method is called by SDL using JNI. */
	public static int captureReadByteBuffer(byte[] buffer, boolean blocking) {
		// !!! FIXME: this is available in API Level 23. Until then, we always
		// block. :(
		// return mAudioRecord.read(buffer, 0, buffer.length, blocking ?
		// AudioRecord.READ_BLOCKING : AudioRecord.READ_NON_BLOCKING);
		return mAudioRecord.read(buffer, 0, buffer.length);
	}
	
	public static void setOnPlayerInfoListener(OnPlayerInfoListener onInfoListener)
	{
		onPlayerInfoListener = onInfoListener;
	}

	// C functions we call
	public static native int nativeInit(String arguments);

	public static native void nativeLowMemory();

	public static native void nativeQuit();

	public static native void nativePause();

	public static native void nativeResume();

	public static native void onNativeDropFile(String filename);

	public static native void onNativeResize(int x, int y, int format, float rate);

	public static native int onNativePadDown(int device_id, int keycode);

	public static native int onNativePadUp(int device_id, int keycode);

	public static native void onNativeJoy(int device_id, int axis, float value);

	public static native void onNativeHat(int device_id, int hat_id, int x, int y);

	public static native void onNativeKeyDown(int keycode);

	public static native void onNativeKeyUp(int keycode);

	public static native void onNativeKeyboardFocusLost();

	public static native void onNativeMouse(int button, int action, float x, float y);

	public static native void onNativeTouch(int touchDevId, int pointerFingerId, int action, float x, float y, float p);

	public static native void onNativeAccel(float x, float y, float z);

	public static native void onNativeSurfaceChanged();

	public static native void onNativeSurfaceDestroyed();

	public static native int nativeAddJoystick(int device_id, String name, int is_accelerometer, int nbuttons,
			int naxes, int nhats, int nballs);

	public static native int nativeRemoveJoystick(int device_id);

	public static native String nativeGetHint(String name);
	
	/** my JNI methods */
	public static native void wlStart();
	//pause
	public static native void wlPause();
	//resume playback
	public static native void wlPlay();
	//total duration in seconds
	public static native int wlDuration();
	//seek to the given second; returns 0 on success, -1 on failure
	public static native int wlSeekTo(int sec);
	//release all resources
	public static native void wlRealease();
	//current playback position in seconds
	public static native int wlNowTime();
	//is the player still initializing? (-1 = still initializing)
	public static native int wlIsInit();
	//release state: -1 = already released
	public static native int wlIsRelease();
	
	
	//invoked from native code once the player is prepared
	public static void onPrepard()
	{
		if(mOnPlayerPrepard != null)
		{
			mOnPlayerPrepard.onPrepard();
		}
	}
	
	
	
	//callback interface: player prepared
	public interface OnPlayerPrepard
	{
		void onPrepard();
	}
	//start playback: run the SDL main entry on its own thread
	public static void prePard()
	{
		mSDLThread = new Thread(new Runnable() {
			
			@Override
			public void run() {
				System.out.println("url:" + url);
				WlPlayer.nativeInit(url);
			}
		}, "mainThread");
		mSDLThread.start();
	}
	
	public static void next(String u)
	{
		if(wlIsInit() == -1)
		{
			if(onErrorListener != null)
			{
				onErrorListener.onError(0x2001, "player is initing, please try later!");
			}
			return;
		}
		url = u;
		mSDLThread = new Thread(new Runnable() {
					
			@Override
			public void run() {
				WlPlayer.wlRealease();
				WlPlayer.nativeInit(url);
			}
		}, "mainThread");
		mSDLThread.start();
	}
	
	public static void release()
	{
		if(wlIsRelease() != -1)
		{
			mSDLThread = new Thread(new Runnable() {
				
				@Override
				public void run() {
					WlPlayer.wlRealease();
				}
			}, "mainThread");
			mSDLThread.start();
		}
		else
		{
			if(onErrorListener != null)
			{
				onErrorListener.onError(0x2002, "player is already release!");
			}
		}
	}
	public interface OnPlayerInfoListener
	{
		void onLoad();
		void onPlay();
	}
	
	public interface OnCompleteListener
	{
		void onConplete();
	}
	
	public interface OnErrorListener
	{
		void onError(int code, String msg);
	}
	
	//playback finished
	public static void onCompleted()
	{
		if(onCompleteListener != null)
		{
			onCompleteListener.onConplete();
		}
	}
	
	//playback error
	public static void onError(int code, String msg)
	{
		if(onErrorListener != null)
		{
			onErrorListener.onError(code, msg);
		}
	}
	
	//buffering
	public static void onLoad()
	{
		if(onPlayerInfoListener != null)
		{
			onPlayerInfoListener.onLoad();
		}
	}
	
	//playing
	public static void onPlay()
	{
		if(onPlayerInfoListener != null)
		{
			onPlayerInfoListener.onPlay();
		}
	}
	
}



/*
 * A null joystick handler for API level < 12 devices (the accelerometer is
 * handled separately)
 */
class SDLJoystickHandler {

	/**
	 * Handles given MotionEvent.
	 * 
	 * @param event
	 *            the event to be handled.
	 * @return if given event was processed.
	 */
	public boolean handleMotionEvent(MotionEvent event) {
		return false;
	}

	/**
	 * Handles adding and removing of input devices.
	 */
	public void pollInputDevices() {
	}
}

/* Actual joystick functionality available for API >= 12 devices */
class SDLJoystickHandler_API12 extends SDLJoystickHandler {

	static class SDLJoystick {
		public int device_id;
		public String name;
		public ArrayList<InputDevice.MotionRange> axes;
		public ArrayList<InputDevice.MotionRange> hats;
	}

	static class RangeComparator implements Comparator<InputDevice.MotionRange> {
		@Override
		public int compare(InputDevice.MotionRange arg0, InputDevice.MotionRange arg1) {
			return arg0.getAxis() - arg1.getAxis();
		}
	}

	private ArrayList<SDLJoystick> mJoysticks;

	public SDLJoystickHandler_API12() {

		mJoysticks = new ArrayList<SDLJoystick>();
	}

	@Override
	public void pollInputDevices() {
		int[] deviceIds = InputDevice.getDeviceIds();
		// It helps processing the device ids in reverse order
		// For example, in the case of the XBox 360 wireless dongle,
		// so the first controller seen by SDL matches what the receiver
		// considers to be the first controller

		for (int i = deviceIds.length - 1; i > -1; i--) {
			SDLJoystick joystick = getJoystick(deviceIds[i]);
			if (joystick == null) {
				joystick = new SDLJoystick();
				InputDevice joystickDevice = InputDevice.getDevice(deviceIds[i]);
				if (WlPlayer.isDeviceSDLJoystick(deviceIds[i])) {
					joystick.device_id = deviceIds[i];
					joystick.name = joystickDevice.getName();
					joystick.axes = new ArrayList<InputDevice.MotionRange>();
					joystick.hats = new ArrayList<InputDevice.MotionRange>();

					List<InputDevice.MotionRange> ranges = joystickDevice.getMotionRanges();
					Collections.sort(ranges, new RangeComparator());
					for (InputDevice.MotionRange range : ranges) {
						if ((range.getSource() & InputDevice.SOURCE_CLASS_JOYSTICK) != 0) {
							if (range.getAxis() == MotionEvent.AXIS_HAT_X
									|| range.getAxis() == MotionEvent.AXIS_HAT_Y) {
								joystick.hats.add(range);
							} else {
								joystick.axes.add(range);
							}
						}
					}

					mJoysticks.add(joystick);
					WlPlayer.nativeAddJoystick(joystick.device_id, joystick.name, 0, -1, joystick.axes.size(),
							joystick.hats.size() / 2, 0);
				}
			}
		}

		/* Check removed devices */
		ArrayList<Integer> removedDevices = new ArrayList<Integer>();
		for (int i = 0; i < mJoysticks.size(); i++) {
			int device_id = mJoysticks.get(i).device_id;
			int j;
			for (j = 0; j < deviceIds.length; j++) {
				if (device_id == deviceIds[j])
					break;
			}
			if (j == deviceIds.length) {
				removedDevices.add(Integer.valueOf(device_id));
			}
		}

		for (int i = 0; i < removedDevices.size(); i++) {
			int device_id = removedDevices.get(i).intValue();
			WlPlayer.nativeRemoveJoystick(device_id);
			for (int j = 0; j < mJoysticks.size(); j++) {
				if (mJoysticks.get(j).device_id == device_id) {
					mJoysticks.remove(j);
					break;
				}
			}
		}
	}

	protected SDLJoystick getJoystick(int device_id) {
		for (int i = 0; i < mJoysticks.size(); i++) {
			if (mJoysticks.get(i).device_id == device_id) {
				return mJoysticks.get(i);
			}
		}
		return null;
	}

	@Override
	public boolean handleMotionEvent(MotionEvent event) {
		if ((event.getSource() & InputDevice.SOURCE_JOYSTICK) != 0) {
			int actionPointerIndex = event.getActionIndex();
			int action = event.getActionMasked();
			switch (action) {
			case MotionEvent.ACTION_MOVE:
				SDLJoystick joystick = getJoystick(event.getDeviceId());
				if (joystick != null) {
					for (int i = 0; i < joystick.axes.size(); i++) {
						InputDevice.MotionRange range = joystick.axes.get(i);
						/* Normalize the value to -1...1 */
						float value = (event.getAxisValue(range.getAxis(), actionPointerIndex) - range.getMin())
								/ range.getRange() * 2.0f - 1.0f;
						WlPlayer.onNativeJoy(joystick.device_id, i, value);
					}
					for (int i = 0; i < joystick.hats.size(); i += 2) {
						int hatX = Math.round(event.getAxisValue(joystick.hats.get(i).getAxis(), actionPointerIndex));
						int hatY = Math
								.round(event.getAxisValue(joystick.hats.get(i + 1).getAxis(), actionPointerIndex));
						WlPlayer.onNativeHat(joystick.device_id, i / 2, hatX, hatY);
					}
				}
				break;
			default:
				break;
			}
		}
		return true;
	}
}

class SDLGenericMotionListener_API12 implements View.OnGenericMotionListener {
	// Generic Motion (mouse hover, joystick...) events go here
	@Override
	public boolean onGenericMotion(View v, MotionEvent event) {
		float x, y;
		int action;

		switch (event.getSource()) {
		case InputDevice.SOURCE_JOYSTICK:
		case InputDevice.SOURCE_GAMEPAD:
		case InputDevice.SOURCE_DPAD:
			return WlPlayer.handleJoystickMotionEvent(event);

		case InputDevice.SOURCE_MOUSE:
			action = event.getActionMasked();
			switch (action) {
			case MotionEvent.ACTION_SCROLL:
				x = event.getAxisValue(MotionEvent.AXIS_HSCROLL, 0);
				y = event.getAxisValue(MotionEvent.AXIS_VSCROLL, 0);
				WlPlayer.onNativeMouse(0, action, x, y);
				return true;

			case MotionEvent.ACTION_HOVER_MOVE:
				x = event.getX(0);
				y = event.getY(0);

				WlPlayer.onNativeMouse(0, action, x, y);
				return true;

			default:
				break;
			}
			break;

		default:
			break;
		}

		// Event was not managed
		return false;
	}
	
}

SDLSurface.java:

 

 

package com.ywl5320.wlsdk.player;

import android.content.Context;
import android.content.pm.ActivityInfo;
import android.graphics.PixelFormat;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Build;
import android.util.AttributeSet;
import android.util.Log;
import android.view.*;

/**
 * @author ywl
 *
 */
public class SDLSurface extends SurfaceView implements SurfaceHolder.Callback{

	private OnSurfacePrepard onSurfacePrepard;
	
	// Sensors
	protected static Display mDisplay;

	// Keep track of the surface size to normalize touch events
	protected static float mWidth, mHeight;

	// Startup
	public SDLSurface(Context context) {
		this(context, null);
		
	}

	public SDLSurface(Context context, AttributeSet attrs)  
    {  
        super(context, attrs);
        getHolder().addCallback(this);

		setFocusable(true);
		setFocusableInTouchMode(true);
		requestFocus();

		mDisplay = ((WindowManager) context.getSystemService(Context.WINDOW_SERVICE)).getDefaultDisplay();

		// Some arbitrary defaults to avoid a potential division by zero
		mWidth = 1.0f;
		mHeight = 1.0f;
    } 
	
	

	public void setOnSurfacePrepard(OnSurfacePrepard onSurfacePrepard) {
		this.onSurfacePrepard = onSurfacePrepard;
	}

	public Surface getNativeSurface() {
		return getHolder().getSurface();
	}

	// Called when we have a valid drawing surface
	@Override
	public void surfaceCreated(SurfaceHolder holder) {
		Log.v("SDL", "surfaceCreated()");
		holder.setType(SurfaceHolder.SURFACE_TYPE_GPU);
	}

	// Called when we lose the surface
	@Override
	public void surfaceDestroyed(SurfaceHolder holder) {
		Log.v("SDL", "surfaceDestroyed()");
		// Call this *before* setting mIsSurfaceReady to 'false'
		WlPlayer.handlePause();
		WlPlayer.mIsSurfaceReady = false;
		WlPlayer.onNativeSurfaceDestroyed();
	}

	// Called when the surface is resized
	@Override
	public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
		Log.v("SDL", "surfaceChanged()");

		int sdlFormat = 0x15151002; // SDL_PIXELFORMAT_RGB565 by default
		switch (format) {
		case PixelFormat.A_8:
			Log.v("SDL", "pixel format A_8");
			break;
		case PixelFormat.LA_88:
			Log.v("SDL", "pixel format LA_88");
			break;
		case PixelFormat.L_8:
			Log.v("SDL", "pixel format L_8");
			break;
		case PixelFormat.RGBA_4444:
			Log.v("SDL", "pixel format RGBA_4444");
			sdlFormat = 0x15421002; // SDL_PIXELFORMAT_RGBA4444
			break;
		case PixelFormat.RGBA_5551:
			Log.v("SDL", "pixel format RGBA_5551");
			sdlFormat = 0x15441002; // SDL_PIXELFORMAT_RGBA5551
			break;
		case PixelFormat.RGBA_8888:
			Log.v("SDL", "pixel format RGBA_8888");
			sdlFormat = 0x16462004; // SDL_PIXELFORMAT_RGBA8888
			break;
		case PixelFormat.RGBX_8888:
			Log.v("SDL", "pixel format RGBX_8888");
			sdlFormat = 0x16261804; // SDL_PIXELFORMAT_RGBX8888
			break;
		case PixelFormat.RGB_332:
			Log.v("SDL", "pixel format RGB_332");
			sdlFormat = 0x14110801; // SDL_PIXELFORMAT_RGB332
			break;
		case PixelFormat.RGB_565:
			Log.v("SDL", "pixel format RGB_565");
			sdlFormat = 0x15151002; // SDL_PIXELFORMAT_RGB565
			break;
		case PixelFormat.RGB_888:
			Log.v("SDL", "pixel format RGB_888");
			// Not sure this is right, maybe SDL_PIXELFORMAT_RGB24 instead?
			sdlFormat = 0x16161804; // SDL_PIXELFORMAT_RGB888
			break;
		default:
			Log.v("SDL", "pixel format unknown " + format);
			break;
		}

		mWidth = width;
		mHeight = height;
		WlPlayer.onNativeResize(width, height, sdlFormat, mDisplay.getRefreshRate());
		Log.v("ywl5320", "Window size: " + width + "x" + height);

		boolean skip = false;
		int requestedOrientation = WlPlayer.mSingleton.getRequestedOrientation();

		if (requestedOrientation == ActivityInfo.SCREEN_ORIENTATION_UNSPECIFIED) {
			// Accept any
		} else if (requestedOrientation == ActivityInfo.SCREEN_ORIENTATION_PORTRAIT) {
			if (mWidth > mHeight) {
				skip = true;
			}
		} else if (requestedOrientation == ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE) {
			if (mWidth < mHeight) {
				skip = true;
			}
		}

		// Special Patch for Square Resolution: Black Berry Passport
		if (skip) {
			double min = Math.min(mWidth, mHeight);
			double max = Math.max(mWidth, mHeight);

			if (max / min < 1.20) {
				Log.v("SDL", "Don't skip on such aspect-ratio. Could be a square resolution.");
				skip = false;
			}
		}

		if (skip) {
			Log.v("SDL", "Skip .. Surface is not ready.");
			return;
		}

		// Set mIsSurfaceReady to 'true' *before* making a call to handleResume
		WlPlayer.mIsSurfaceReady = true;
		WlPlayer.onNativeSurfaceChanged();

		if (WlPlayer.mHasFocus) {
			WlPlayer.handleResume();
		}
		
		if(onSurfacePrepard != null)
		{
			onSurfacePrepard.onPrepard();
		}
	}
	
	public interface OnSurfacePrepard
	{
		void onPrepard();
	}
}

This separates everything from the default SDLActivity.java: you no longer extend that Activity, and the player can be used on its own. Note the package name on the native methods. I changed it to my own package, so the corresponding package names in the SDL C sources must be changed to match; only two files in the SDL source need touching (they were shown in screenshots here). The underlying rule is illustrated below.
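The reason is the JNI naming convention: the C symbol for a native method embeds the Java package and class names, so SDL's stock functions, which are bound to org.libsdl.app.SDLActivity, stop resolving once the Java side moves. An illustration of the rename (assuming an SDL 2.0.5-era source tree; exact signatures vary between SDL versions):

/* JNI symbol = Java_ + package (dots become underscores) + class + method. */

/* stock SDL, bound to org.libsdl.app.SDLActivity: */
JNIEXPORT int JNICALL Java_org_libsdl_app_SDLActivity_nativeInit(JNIEnv* env, jclass cls, jobject array);

/* after moving the native methods into com.ywl5320.wlsdk.player.WlPlayer: */
JNIEXPORT int JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_nativeInit(JNIEnv* env, jclass cls, jobject array);

/* The class strings SDL passes to FindClass() must move the same way:
   "org/libsdl/app/SDLActivity" -> "com/ywl5320/wlsdk/player/WlPlayer". */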

 

 

 

With that, the SDL port is done.

III. Write our C file to parse and play the stream

 

player.c source:

 

#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#include <jni.h>

#include "SDL.h"
#include "SDL_thread.h"

#include <android/log.h>
#define LOGI(FORMAT,...) __android_log_print(ANDROID_LOG_INFO,"ywl5320",FORMAT,##__VA_ARGS__);
#define LOGE(FORMAT,...) __android_log_print(ANDROID_LOG_ERROR,"ywl5320",FORMAT,##__VA_ARGS__);

#include "libavformat/avformat.h"
#include "libavcodec/avcodec.h"
#include "libswscale/swscale.h"
#include "libswresample/swresample.h"
#include "libavutil/mathematics.h"
#include "libavutil/samplefmt.h"

#define SDL_AUDIO_BUFFER_SIZE 1024
#define AVCODEC_MAX_AUDIO_FRAME_SIZE 192000

int quit = 0;//0: playing  1: paused  -1: finished/released
int play = 0;//0: initializing  1: playing
int isOver = 0;//0: stream not ended  1: end of stream reached

void release();
void onErrorMsg(int code, char *msg);
// JNI upcalls into WlPlayer, implemented in the bridge file (not shown here)
extern void onParpred();
extern void onLoad();
extern void onPlay();
extern void onComplete();
extern void onError(int code, char *msg);

typedef struct PacketQueue
{
	AVPacketList *first_pkt, *last_pkt;
	int nb_packets;
	int size;
	SDL_mutex *mutex;
	SDL_cond *cond;
}PacketQueue;

void packet_queue_init(PacketQueue *q) {
	memset(q, 0, sizeof(PacketQueue));
	q->mutex = SDL_CreateMutex();
	q->cond = SDL_CreateCond();
}


int packet_queue_put(PacketQueue *q, AVPacket *pkt) {

	if(quit == -1)
		return -1;
	AVPacketList *pkt1;
	if (av_dup_packet(pkt) < 0) {
		return -1;
	}
	pkt1 = av_malloc(sizeof(AVPacketList));
	if (!pkt1)
		return -1;
	pkt1->pkt = *pkt;
	pkt1->next = NULL;

	SDL_LockMutex(q->mutex);

	if (!q->last_pkt)
		q->first_pkt = pkt1;
	else
		q->last_pkt->next = pkt1;
	q->last_pkt = pkt1;
	q->nb_packets++;
	q->size += pkt1->pkt.size;
	SDL_CondSignal(q->cond);

	SDL_UnlockMutex(q->mutex);
	return 0;
}

static int packet_queue_get(PacketQueue *q, AVPacket *pkt) {
	if(quit == -1)
		return -1;
	AVPacketList *pkt1;
	int ret;
	SDL_LockMutex(q->mutex);

	for (;;) {
		pkt1 = q->first_pkt;
		if (pkt1) {
			q->first_pkt = pkt1->next;
			if (!q->first_pkt)
				q->last_pkt = NULL;
			q->nb_packets--;
			q->size -= pkt1->pkt.size;
			*pkt = pkt1->pkt;
			av_free(pkt1);
			ret = 1;
			break;
		}else if(quit == -1){
			ret = -1;
			break;
		}
		else {
			SDL_CondWait(q->cond, q->mutex);
		}
	}
	SDL_UnlockMutex(q->mutex);
	return ret;
}

int getQueueSize(PacketQueue *q)
{
	return q->nb_packets;
}

typedef struct PlayerState
{
	char *url;
	SDL_Thread *decodeThread;
	AVFormatContext *pFormatCtx;//container (format) context
	int audioStreamIndex;//audio stream index (only one audio stream handled for now)
	AVStream *audioStream;//audio stream
	int audioDuration;//duration in seconds
	int audioPts;
	SDL_AudioSpec wanted_spec, spec;
	AVCodecContext *audioCodecCtx;//audio codec context
	AVCodec *audioCodec;//audio decoder
	PacketQueue audioq;//audio packet queue

}PlayerState;


PlayerState *playerState;


int audio_decode_frame(AVCodecContext *aCodecCtx, uint8_t *audio_buf, int buf_size) {

	if(quit == -1)
	{
		return -1;
	}
	AVFrame *frame = av_frame_alloc();
	int data_size = 0;
	AVPacket pkt;
	int got_frame_ptr;

	SwrContext *swr_ctx;

	if (packet_queue_get(&playerState->audioq, &pkt) < 0)
		return -1;

	int ret = avcodec_send_packet(aCodecCtx, &pkt);
	if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
		return -1;

	ret = avcodec_receive_frame(aCodecCtx, frame);
	if (ret < 0 && ret != AVERROR_EOF)
		return -1;

	// fill in whichever of channels/channel_layout is missing
	if (frame->channels > 0 && frame->channel_layout == 0)
		frame->channel_layout = av_get_default_channel_layout(frame->channels);
	else if (frame->channels == 0 && frame->channel_layout > 0)
		frame->channels = av_get_channel_layout_nb_channels(frame->channel_layout);

	enum AVSampleFormat dst_format = AV_SAMPLE_FMT_S16;//av_get_packed_sample_fmt((AVSampleFormat)frame->format);

	//resample to stereo
	Uint64 dst_layout = AV_CH_LAYOUT_STEREO;
	// configure the converter (allocating a SwrContext per frame is wasteful but keeps things simple)
	swr_ctx = swr_alloc_set_opts(NULL, dst_layout, dst_format, frame->sample_rate,
		frame->channel_layout, (enum AVSampleFormat)frame->format, frame->sample_rate, 0, NULL);
	if (!swr_ctx || swr_init(swr_ctx) < 0)
		return -1;

	// number of output samples, rounded up (av_rescale_rnd computes a * b / c)
	int dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, frame->sample_rate) + frame->nb_samples, frame->sample_rate, frame->sample_rate, AV_ROUND_INF);
	// convert; the return value is the number of samples actually produced
	int nb = swr_convert(swr_ctx, &audio_buf, dst_nb_samples, (const uint8_t**)frame->data, frame->nb_samples);

	//channel count for the destination layout
	int out_channels = av_get_channel_layout_nb_channels(dst_layout);
	data_size = out_channels * nb * av_get_bytes_per_sample(dst_format);
	playerState->audioPts = pkt.pts;
	av_packet_unref(&pkt);
	av_frame_free(&frame);
	swr_free(&swr_ctx);
	return data_size;
}
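/*
 * A note on the send/receive pair above: avcodec_send_packet() may queue more
 * than one frame, so the canonical FFmpeg 3.x loop keeps calling
 * avcodec_receive_frame() until it returns AVERROR(EAGAIN). A sketch of that
 * fuller shape (resample_and_output() is a hypothetical stand-in for the
 * SwrContext work done above):
 *
 *     int ret = avcodec_send_packet(aCodecCtx, &pkt);
 *     while (ret >= 0) {
 *         ret = avcodec_receive_frame(aCodecCtx, frame);
 *         if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 *             break;                  // decoder wants more input / is drained
 *         if (ret < 0)
 *             return -1;              // real decode error
 *         resample_and_output(frame); // hypothetical: swr_convert + copy out
 *     }
 *
 * With a single receive per send, a packet that carries several frames would
 * lose audio; most streaming audio packets carry exactly one frame, which is
 * why the simpler version above plays fine in practice.
 */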


void audio_callback(void *userdata, Uint8 *stream, int len) {


	AVCodecContext *aCodecCtx = (AVCodecContext *)userdata;

	int len1, audio_size;

	static uint8_t audio_buff[(AVCODEC_MAX_AUDIO_FRAME_SIZE * 3) / 2];
	static unsigned int audio_buf_size = 0;
	static unsigned int audio_buf_index = 0;

	SDL_memset(stream, 0, len);

	if(quit == 1 || quit == -1)
	{
		// paused or released: the device keeps pulling, so feed it silence
		SDL_PauseAudio(0);
		memset(audio_buff, 0, audio_buf_size);
		SDL_MixAudio(stream, audio_buff + audio_buf_index, len, 0);
		return;
	}

//	LOGI("pkt nums: %d    queue size: %d\n", playerState->audioq.nb_packets, playerState->audioq.size);
	while (len > 0)// the device wants exactly len bytes
	{
		if (audio_buf_index >= audio_buf_size) // local buffer exhausted
		{
			// decode more data from the packet queue
			audio_size = audio_decode_frame(aCodecCtx, audio_buff, sizeof(audio_buff));
			if (audio_size < 0) // nothing decoded or an error: emit a slice of silence
			{
				audio_buf_size = 1024;
				memset(audio_buff, 0, audio_buf_size);
			}
			else
				audio_buf_size = audio_size;

			audio_buf_index = 0;
		}
		len1 = audio_buf_size - audio_buf_index; // bytes still valid in the local buffer
		if (len1 > len) // never hand the device more than it asked for
			len1 = len;

		SDL_MixAudio(stream, audio_buff + audio_buf_index, len1, SDL_MIX_MAXVOLUME);

		len -= len1;
		stream += len1;
		audio_buf_index += len1;
	}
}

int decodeFile(void *args)
{
//	LOGI("decode ...");
	if (avformat_open_input(&playerState->pFormatCtx, playerState->url, NULL, NULL) != 0)
	{
//		LOGE("can not open url:%s", playerState->url);
		onErrorMsg(0x1002, "can not open the source url!");
		return -1;
	}
//	LOGI("here ...");
	if (avformat_find_stream_info(playerState->pFormatCtx, NULL) < 0)
	{
//		LOGE("can not find streams from %s", playerState->url);
		onErrorMsg(0x1003, "can not find streams from the source url!");
		return -1;
	}
//	LOGI("here2 ...");
	int i = 0;
	for (; i < playerState->pFormatCtx->nb_streams; i++)
	{
		if (playerState->pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
		{
			playerState->audioStreamIndex = i;
			break;
		}
	}
//	LOGI("here3 ...");
	if (playerState->audioStreamIndex == -1)
	{
//		LOGE("can not find audio streams from %s", playerState->url);
		onErrorMsg(0x1004, "can not find audio streams from the source url!");
		return -1;
	}

	playerState->audioStream = playerState->pFormatCtx->streams[playerState->audioStreamIndex];

	playerState->audioDuration = playerState->pFormatCtx->duration / 1000000;// duration is in AV_TIME_BASE (microsecond) units
//	LOGI("duration:%d", playerState->audioDuration);
//	int h = playerState->audioDuration / 3600;
//	int m = (playerState->audioDuration - 3600 * h) / 60;
//	int s = playerState->audioDuration - 3600 * h - m * 60;
//	LOGI("%02d:%02d:%02d", h, m, s);

	AVCodecContext* pCodecCtxOrg;
	pCodecCtxOrg = playerState->pFormatCtx->streams[playerState->audioStreamIndex]->codec; // codec context
	playerState->audioCodec = avcodec_find_decoder(pCodecCtxOrg->codec_id);
	if (!playerState->audioCodec)
	{
//		LOGE("can not find audio %d codecctx!", playerState->audioStreamIndex);
		onErrorMsg(0x1005, "can not find audio codecctx!");
		play = 1;
		return -1;
	}
//	LOGI("here4 ...");
	// don't use the codec context owned by the AVFormatContext directly; work on a copy
	playerState->audioCodecCtx = avcodec_alloc_context3(playerState->audioCodec);
	if (avcodec_copy_context(playerState->audioCodecCtx, pCodecCtxOrg) != 0)
	{
//		LOGE("Could not copy codec context!");
		onErrorMsg(0x1006, "Could not copy codec context!");
		return -1;
	}
	// note: pCodecCtxOrg is owned by pFormatCtx, so it must not be freed here

	//set up SDL audio

	playerState->wanted_spec.freq = playerState->audioCodecCtx->sample_rate;
	playerState->wanted_spec.format = AUDIO_S16SYS;
	playerState->wanted_spec.channels = 2;
	playerState->wanted_spec.silence = 0;
	playerState->wanted_spec.samples = SDL_AUDIO_BUFFER_SIZE;
	playerState->wanted_spec.callback = audio_callback;
	playerState->wanted_spec.userdata = playerState->audioCodecCtx;
	if(SDL_OpenAudio(&playerState->wanted_spec, &playerState->spec) < 0)
	{
//		LOGE("sdl open audio failed:");
		onErrorMsg(0x1007, "sdl open audio failed!");
		return -1;
	}
	SDL_PauseAudio(0);

	if(avcodec_open2(playerState->audioCodecCtx, playerState->audioCodec, NULL) != 0)
	{
//		LOGE("open audio codec fail");
		onErrorMsg(0x1008, "open audio codec fail!");
		return -1;
	}
	onParpred();// upcall into WlPlayer.onPrepard() via JNI (bridge code not shown)
	AVPacket packet;
	int index = 0;
	while (1)
	{
		if(quit == -1)
		{
			break;
		}
		if(play == 0)
		{
			continue;// busy-wait until wlStart() flips play to 1
		}
		if(playerState)
		{
			if (getQueueSize(&playerState->audioq) < 50)
			{
				int ret = av_read_frame(playerState->pFormatCtx, &packet);
				if (ret == 0)
				{
					isOver = 0;
					if (packet.stream_index == playerState->audioStreamIndex)
					{
						packet_queue_put(&playerState->audioq, &packet);
	//					LOGI("code %d", index++);
					}
					else
					{
						av_packet_unref(&packet);
					}
				}
				else if(ret == AVERROR_EOF)
				{
					isOver = 1;
					if(getQueueSize(&playerState->audioq) == 0)
					{
						quit = 1;
						onComplete();
						return 0;
					}
//					LOGE("right av_read_frame finished return %d", ret);
				}
				else
				{
//					LOGE("av_read_frame finished return %d", ret);
				}
			}
		}
	}
//	LOGI("here7 ...");

	return 0;
}

int avformat_interrupt_cb(void *ctx)
{
	// returning 1 makes any blocking FFmpeg I/O call bail out, so release() can't hang
	if(quit == -1)
		return 1;
	return 0;
}


int main(int argc, char* args[])
{
//	LOGI(".............come from main............");
//	LOGI("input url: %s", args[1]);
	quit = 0;
	play = 0;
	if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER))
	{
		onErrorMsg(0x1001, "init sdl error!");
		return -1;
	}
	av_register_all();
	avformat_network_init();
	playerState = malloc(sizeof(PlayerState));
	packet_queue_init(&playerState->audioq);
	playerState->url = args[1];
	playerState->audioStreamIndex = -1;
	playerState->pFormatCtx = avformat_alloc_context();
	// install the interrupt callback before the decode thread can start blocking I/O
	playerState->pFormatCtx->interrupt_callback.callback = avformat_interrupt_cb;
	playerState->decodeThread = SDL_CreateThread(decodeFile, "decodeThread", NULL);

	for(;;)
	{
		if(playerState)
		{
			if(quit == 0)
			{
				if(getQueueSize(&playerState->audioq) == 0)
				{
					if(isOver != 1)//stream not finished yet: we are still buffering
					{
//						LOGI("loading....");
						onLoad();
					}
				}
				else
				{
//					LOGI("plalying....");
					onPlay();
				}
			}
		}
		else{
			play = 1;
			return 0;
		}
		SDL_Delay(10);
	}
	play = 1;
	return 0;
}

void JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_wlStart(JNIEnv* env, jclass jcls)
{
	if(play == 0)
	{
		play = 1;
	}
}

void JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_wlPause(JNIEnv* env, jclass jcls)
{
//	LOGI("pause");
	if(quit != 1 && isOver != 1)
	{
		quit = 1;
		if(playerState && playerState->pFormatCtx)
		{
			av_read_pause(playerState->pFormatCtx);
		}
	}
}
void JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_wlPlay(JNIEnv* env, jclass jcls)
{
//	LOGI("play");
	if(quit != 0 && isOver != 1)
	{
		quit = 0;
		if(playerState && playerState->pFormatCtx)
		{
			av_read_play(playerState->pFormatCtx);
		}
	}
}

jint JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_wlDuration(JNIEnv *env, jclass jcls)
{
	if(playerState)
	{
		if(playerState->audioDuration > 0)
		{
			return playerState->audioDuration;
		}
	}
	return 0;
}

void JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_wlRealease(JNIEnv* env, jclass jcls)
{
//	LOGI("release");
	release();

}

jint JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_wlNowTime(JNIEnv* env, jclass jcls)
{
	if(playerState && playerState->audioStream && getQueueSize(&playerState->audioq) > 0 && playerState->audioStream->time_base.den > 0)
	{
		return playerState->audioPts / playerState->audioStream->time_base.den;
	}
	return 0;
}
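/*
 * wlNowTime() divides the raw pts by time_base.den, which is exact only while
 * the stream's time_base numerator is 1 (true of the usual 1/44100-style
 * audio time bases). The general conversion is:
 *
 *     int64_t seconds = av_rescale_q(playerState->audioPts,
 *                                    playerState->audioStream->time_base,
 *                                    (AVRational){1, 1});
 */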

int JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_wlSeekTo(JNIEnv* env, jclass jcls, jint secds)
{
//	LOGI("wlSeekTo %d", secds);
	if(playerState && playerState->audioStream && secds < playerState->audioDuration && isOver != 1)
	{
		quit = 1;
		// seconds -> pts; multiplying by time_base.den assumes time_base.num == 1
		if(av_seek_frame(playerState->pFormatCtx, playerState->audioStreamIndex, secds * playerState->audioStream->time_base.den, AVSEEK_FLAG_ANY) >= 0)
		{
			// drop whatever is still queued (the list nodes leak here; a production
			// version should unref each packet and free its node first)
			playerState->audioq.first_pkt = NULL;
			playerState->audioq.last_pkt = NULL;
			playerState->audioq.nb_packets = 0;
			playerState->audioq.size = 0;
		}
		quit = 0;
		return 0;
	}
	return -1;
}

jint JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_wlIsInit(JNIEnv* env, jclass jcls)
{
	if(play == 0)
	{
		return -1;
	}
	return 0;
}

jint JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_wlIsRelease(JNIEnv* env, jclass jcls)
{
	return quit;
}

void release()
{
	quit = -1;
	play = 1;
	SDL_CloseAudio();
	free(playerState);// matches the malloc() in main (av_free pairs with av_malloc)
	playerState = NULL;
	SDL_Quit();
}


void onErrorMsg(int code, char *msg)
{
	quit = -1;
	onError(code, msg);
	release();
}
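One thing the listing above takes for granted: how does the url passed to WlPlayer.nativeInit() become args[1] of main()? SDL's Android glue builds an argv from the Java arguments and then calls your main() (which SDL_main.h renames to SDL_main). A sketch of the idea, assuming the stock SDL_android_main.c is modified to accept the single url string; names and signatures differ between SDL versions:

#include <jni.h>
#include "SDL.h"
#include "SDL_main.h"

/* hypothetical modified glue: forward the Java url string as argv[1] */
int JNICALL Java_com_ywl5320_wlsdk_player_WlPlayer_nativeInit(JNIEnv* env, jclass cls, jstring url)
{
	SDL_Android_Init(env, cls);       /* bind SDL's Java upcalls to WlPlayer */
	SDL_SetMainReady();

	const char *curl = (*env)->GetStringUTFChars(env, url, NULL);
	char *argv[3] = { SDL_strdup("wlPlayer"), SDL_strdup(curl), NULL };
	int status = SDL_main(2, argv);   /* enters main() in player.c */

	(*env)->ReleaseStringUTFChars(env, url, curl);
	return status;
}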


The flow enters through main(): FFmpeg and SDL are initialized there, and the stream is then parsed on a thread created via SDL. The file above is my cleaned-up version; below is the original test file, which makes the flow easier to follow.
good_audio.c

 

 

#include <stdlib.h>
#include <stdio.h>
#include <time.h>

#include "SDL.h"
#include "SDL_thread.h"

#include <android/log.h>
#define LOGI(FORMAT,...) __android_log_print(ANDROID_LOG_INFO,"ywl5320",FORMAT,##__VA_ARGS__);
#define LOGE(FORMAT,...) __android_log_print(ANDROID_LOG_ERROR,"ywl5320",FORMAT,##__VA_ARGS__);

#include "libavformat/avformat.h"
#include "libavcodec/avcodec.h"
#include "libswscale/swscale.h"
#include "libswresample/swresample.h"
#include "libavutil/mathematics.h"
#include "libavutil/samplefmt.h"

#define SDL_AUDIO_BUFFER_SIZE 1024
#define AVCODEC_MAX_AUDIO_FRAME_SIZE 192000

typedef struct PacketQueue
{
	AVPacketList *first_pkt, *last_pkt;
	int nb_packets;
	int size;
	SDL_mutex *mutex;
	SDL_cond *cond;
}PacketQueue;

PacketQueue audioq;
int quit = 0;

void packet_queue_init(PacketQueue *q) {
	memset(q, 0, sizeof(PacketQueue));
	q->mutex = SDL_CreateMutex();
	q->cond = SDL_CreateCond();
}


int packet_queue_put(PacketQueue *q, AVPacket *pkt) {

	AVPacketList *pkt1;
	if (av_dup_packet(pkt) < 0) {
		return -1;
	}
	pkt1 = av_malloc(sizeof(AVPacketList));
	if (!pkt1)
		return -1;
	pkt1->pkt = *pkt;
	pkt1->next = NULL;

	SDL_LockMutex(q->mutex);

	if (!q->last_pkt)
		q->first_pkt = pkt1;
	else
		q->last_pkt->next = pkt1;
	q->last_pkt = pkt1;
	q->nb_packets++;
	q->size += pkt1->pkt.size;
	SDL_CondSignal(q->cond);

	SDL_UnlockMutex(q->mutex);
	return 0;
}

static int packet_queue_get(PacketQueue *q, AVPacket *pkt, int block) {
	AVPacketList *pkt1;
	int ret;

	SDL_LockMutex(q->mutex);

	for (;;) {

		if (quit) {
			ret = -1;
			break;
		}

		pkt1 = q->first_pkt;
		if (pkt1) {
			q->first_pkt = pkt1->next;
			if (!q->first_pkt)
				q->last_pkt = NULL;
			q->nb_packets--;
			q->size -= pkt1->pkt.size;
			*pkt = pkt1->pkt;
			av_free(pkt1);
			ret = 1;
			break;
		} else if (!block) {
			ret = 0;
			break;
		} else {
			SDL_CondWait(q->cond, q->mutex);
		}
	}
	SDL_UnlockMutex(q->mutex);
	return ret;
}

int getQueueSize(PacketQueue *q)
{
	return q->nb_packets;
}

int audio_decode_frame(AVCodecContext *aCodecCtx, uint8_t *audio_buf, int buf_size) {

	AVFrame *frame = av_frame_alloc();
	int data_size = 0;
	AVPacket pkt;
	int got_frame_ptr;

	SwrContext *swr_ctx;

	if (quit)
		return -1;
	if (packet_queue_get(&audioq, &pkt, 1) < 0)
		return -1;
	int ret = avcodec_decode_audio4(aCodecCtx, frame, &got_frame_ptr, &pkt);
	//int ret = avcodec_send_packet(aCodecCtx, &pkt);
	if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
		return -1;

	//ret = avcodec_receive_frame(aCodecCtx, frame);
	//if (ret < 0 && ret != AVERROR_EOF)
	//	return -1;

	// fill in whichever of channels/channel_layout is missing
	if (frame->channels > 0 && frame->channel_layout == 0)
		frame->channel_layout = av_get_default_channel_layout(frame->channels);
	else if (frame->channels == 0 && frame->channel_layout > 0)
		frame->channels = av_get_channel_layout_nb_channels(frame->channel_layout);

	enum AVSampleFormat dst_format = AV_SAMPLE_FMT_S16;//av_get_packed_sample_fmt((AVSampleFormat)frame->format);

	//resample to stereo
	Uint64 dst_layout = AV_CH_LAYOUT_STEREO;
	// configure the converter
	swr_ctx = swr_alloc_set_opts(NULL, dst_layout, dst_format, frame->sample_rate,
		frame->channel_layout, (enum AVSampleFormat)frame->format, frame->sample_rate, 0, NULL);
	if (!swr_ctx || swr_init(swr_ctx) < 0)
		return -1;

	// number of output samples (av_rescale_rnd computes a * b / c)
	int dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, frame->sample_rate) + frame->nb_samples, frame->sample_rate, frame->sample_rate, 1);
	// convert; the return value is the number of samples actually produced
	int nb = swr_convert(swr_ctx, &audio_buf, dst_nb_samples, (const uint8_t**)frame->data, frame->nb_samples);

	//channel count for the destination layout
	int out_channels = av_get_channel_layout_nb_channels(dst_layout);
	data_size = out_channels * nb * av_get_bytes_per_sample(dst_format);

	av_frame_free(&frame);
	swr_free(&swr_ctx);
	return data_size;
}


void audio_callback(void *userdata, Uint8 *stream, int len) {

	AVCodecContext *aCodecCtx = (AVCodecContext *)userdata;

	int len1, audio_size;

	static uint8_t audio_buff[(AVCODEC_MAX_AUDIO_FRAME_SIZE * 3) / 2];
	static unsigned int audio_buf_size = 0;
	static unsigned int audio_buf_index = 0;

	SDL_memset(stream, 0, len);
	if (getQueueSize(&audioq) > 0)
	{
		LOGI("pkt nums: %d    queue size: %d\n", audioq.nb_packets, audioq.size);
		while (len > 0)// the device wants exactly len bytes
		{
			if (audio_buf_index >= audio_buf_size) // local buffer exhausted
			{
				// decode more data from the packet queue
				audio_size = audio_decode_frame(aCodecCtx, audio_buff, sizeof(audio_buff));
				if (audio_size < 0) // nothing decoded or an error: emit a slice of silence
				{
					audio_buf_size = 1024;
					memset(audio_buff, 0, audio_buf_size);
				}
				else
					audio_buf_size = audio_size;

				audio_buf_index = 0;
			}
			len1 = audio_buf_size - audio_buf_index; // bytes still valid in the local buffer
			if (len1 > len) // never hand the device more than it asked for
				len1 = len;

			SDL_MixAudio(stream, audio_buff + audio_buf_index, len1, SDL_MIX_MAXVOLUME);

			len -= len1;
			stream += len1;
			audio_buf_index += len1;
		}
	}
	else
	{
		LOGI("pkt nums: %d    queue size: %d\n", audioq.nb_packets, audioq.size);
		LOGI("play complete");
		SDL_CloseAudio();
		SDL_Quit();
	}
}

int main(int argc, char* args[])
{

	LOGI(".............come from main............");
	LOGI("input url: %s", args[1]);
	av_register_all();
	avformat_network_init();
	packet_queue_init(&audioq);// create the queue's mutex/cond before any packets flow

	AVFormatContext *pFormatCtx = NULL;// must start NULL so avformat_open_input allocates it
	if (avformat_open_input(&pFormatCtx, args[1], NULL, NULL) != 0)
		return -1;

	if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
		return -1;

//	av_dump_format(pFormatCtx, 0, args[1], 0);

	int audioStream = -1;
	int i = 0;
	for (; i < pFormatCtx->nb_streams; i++)
	{
		if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
		{
			audioStream = i;
			break;
		}
	}

	if (audioStream == -1)
		return -1;

	AVCodecContext* pCodecCtxOrg;
	AVCodecContext* pCodecCtx;

	AVCodec* pCodec;

	pCodecCtxOrg = pFormatCtx->streams[audioStream]->codec; // codec context

	// find the decoder for the audio stream
	pCodec = avcodec_find_decoder(pCodecCtxOrg->codec_id);

	if (!pCodec)
	{
		LOGE("Unsupported codec!")
		return -1;
	}

	// don't use the codec context owned by the AVFormatContext directly; work on a copy
	pCodecCtx = avcodec_alloc_context3(pCodec);
	if (avcodec_copy_context(pCodecCtx, pCodecCtxOrg) != 0)
	{
		LOGE("Could not copy codec context!");
		return -1;
	}

	SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER);
	// Set audio settings from codec info
	SDL_AudioSpec wanted_spec, spec;
	wanted_spec.freq = pCodecCtx->sample_rate;
	wanted_spec.format = AUDIO_S16SYS;
	wanted_spec.channels = pCodecCtx->channels;
	wanted_spec.silence = 0;
	wanted_spec.samples = SDL_AUDIO_BUFFER_SIZE;
	wanted_spec.callback = audio_callback;
	wanted_spec.userdata = pCodecCtx;

	if(SDL_OpenAudio(&wanted_spec, &spec) < 0)
	{
		LOGE("Open audio failed:");
		return -1;
	}

	LOGI("come here...");

	avcodec_open2(pCodecCtx, pCodec, NULL);

	SDL_PauseAudio(0);

	AVPacket packet;
	while (1)
	{
		if (getQueueSize(&audioq) < 50)
		{
			int ret = av_read_frame(pFormatCtx, &packet);
			if (ret >= 0)
			{
				if (packet.stream_index == audioStream)
					packet_queue_put(&audioq, &packet);
				else
					av_packet_unref(&packet);
			}
		}
	}

	avformat_close_input(&pFormatCtx);
	return 0;
}


The video playback file:

 

good_video.c

 

#include <stdlib.h>
#include <stdio.h>
#include <time.h>

#include "SDL.h"
#include "SDL_thread.h"

#include <android/log.h>
#define LOGI(FORMAT,...) __android_log_print(ANDROID_LOG_INFO,"ywl5320",FORMAT,##__VA_ARGS__);
#define LOGE(FORMAT,...) __android_log_print(ANDROID_LOG_ERROR,"ywl5320",FORMAT,##__VA_ARGS__);

#include "libavformat/avformat.h"
#include "libavcodec/avcodec.h"
#include "libswscale/swscale.h"
#include "libswresample/swresample.h"
#include "libavutil/mathematics.h"
#include "libavutil/samplefmt.h"

static const int SDL_AUDIO_BUFFER_SIZE = 1024;
static const int MAX_AUDIO_FRAME_SIZE = 192000;

/*
packet queue
*/
typedef struct PackeQueue
{
	AVPacketList *first_pkt, *last_pkt;

	int data_size;
	int nb_pkts;

	SDL_mutex *mutex;
	SDL_cond *cond;

}PackeQueue;

void init_Queue(PackeQueue *queue)
{
	memset(queue, 0, sizeof(PackeQueue));// the caller provides the storage; don't malloc over the pointer
	queue->mutex = SDL_CreateMutex();
	queue->cond = SDL_CreateCond();
}

int push_Queue(PackeQueue *queue, AVPacket *pkt)
{
	AVPacketList *pkt1;
	if (av_dup_packet(pkt) < 0) {
		return -1;
	}
	pkt1 = (AVPacketList *)av_malloc(sizeof(AVPacketList));
	if (!pkt1)
		return -1;
	pkt1->pkt = *pkt;
	pkt1->next = NULL;

	SDL_LockMutex(queue->mutex);

	if (!queue->last_pkt)//no last packet yet: the queue is empty
	{
		queue->first_pkt = pkt1;
	}
	else
	{
		queue->last_pkt->next = pkt1;
	}

	queue->last_pkt = pkt1;
	queue->nb_pkts++;
	queue->data_size += pkt1->pkt.size;

	SDL_CondSignal(queue->cond);
	SDL_UnlockMutex(queue->mutex);
	return 0;
}

int pop_Queue(PackeQueue *queue, AVPacket *pkt)
{
	AVPacketList *pkt1;
	SDL_LockMutex(queue->mutex);
	pkt1 = queue->first_pkt;
	int ret = -1;
	for (;;)
	{
		if (pkt1)
		{
			queue->first_pkt = pkt1->next;
			queue->data_size -= pkt1->pkt.size;
			queue->nb_pkts--;
			*pkt = pkt1->pkt;
			av_free(pkt1);
			ret = 0;
			break;
		}
		else
		{
			SDL_CondWait(queue->cond, queue->mutex);
		}
	}
	SDL_UnlockMutex(queue->mutex);
	return ret;

}



int main(int argv, char* argc[])
{
	//1. register all supported container formats and their codecs
	av_register_all();
	avformat_network_init();

	//char* filenName = "http://live.g3proxy.lecloud.com/gslb?stream_id=lb_yxlm_1800&tag=live&ext=m3u8&sign=live_tv&platid=10&splatid=1009";
	//char *filenName = "jxtg3.mkv";


	// 2. open the video file
	AVFormatContext* pFormatCtx = NULL;
	// read the file header; format information is stored in the AVFormatContext
	if (avformat_open_input(&pFormatCtx, argc[1], NULL, NULL) != 0)
		return -1; // open failed

	// probe the file's stream info
	if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
		return -1; // no stream information found

	// dump the file info to the console
	av_dump_format(pFormatCtx, 0, argc[1], 0);

	//find the first video stream
	int videoStream = -1;
	int i = 0;
	for (; i < pFormatCtx->nb_streams; i++)
	{
		if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
		{
			videoStream = i;
			break;
		}
	}

	if (videoStream == -1)
		return -1; // no video stream found

	AVCodecContext* pCodecCtxOrg = NULL;
	AVCodecContext* pCodecCtx = NULL;

	AVCodec* pCodec = NULL;

	pCodecCtxOrg = pFormatCtx->streams[videoStream]->codec; // codec context

	// find the decoder for the video stream
	pCodec = avcodec_find_decoder(pCodecCtxOrg->codec_id);

	if (!pCodec)
	{
//		cout << "Unsupported codec!" << endl;
		return -1;
	}

	// don't use the codec context owned by the AVFormatContext directly; work on a copy
	pCodecCtx = avcodec_alloc_context3(pCodec);
	if (avcodec_copy_context(pCodecCtx, pCodecCtxOrg) != 0)
	{
//		cout << "Could not copy codec context!" << endl;
		return -1;
	}

	// open codec
	if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
		return -1; // could not open the codec

	AVFrame* pFrame = NULL;
	AVFrame* pFrameYUV = NULL;

	pFrame = av_frame_alloc();
	pFrameYUV = av_frame_alloc();

	// size of the buffer for the converted frames
	int numBytes = 0;
	uint8_t* buffer = NULL;

	numBytes = avpicture_get_size(AV_PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height);
	buffer = (uint8_t*)av_malloc(numBytes * sizeof(uint8_t));

	avpicture_fill((AVPicture*)pFrameYUV, buffer, AV_PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height);

	struct SwsContext* sws_ctx = NULL;
	sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
			pCodecCtx->width, pCodecCtx->height, AV_PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);

	///////////////////////////////////////////////////////////////////////////
	//
	// SDL2.0
	//
	//////////////////////////////////////////////////////////////////////////
	SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER);
	SDL_Window* window = SDL_CreateWindow("FFmpeg Decode", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
			pCodecCtx->width, pCodecCtx->height, SDL_WINDOW_OPENGL);
	SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, 0);
	SDL_Texture* bmp = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12, SDL_TEXTUREACCESS_STREAMING,
		pCodecCtx->width, pCodecCtx->height);
	SDL_Rect rect;
	rect.x = 0;
	rect.y = 0;
	rect.w = pCodecCtx->width;
	rect.h = pCodecCtx->height;

	SDL_Event event;

	AVPacket packet;
	while (av_read_frame(pFormatCtx, &packet) >= 0)
	{
		if (packet.stream_index == videoStream)
		{
			int frameFinished = 0;
			avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
			if (frameFinished)
			{
				printf("pts:%d \n", pFrame->pkt_pts / 1000);




				sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data, pFrame->linesize, 0,
					pCodecCtx->height, pFrameYUV->data, pFrameYUV->linesize);

				//SDL_UpdateTexture(bmp, &rect, pFrameYUV->data[0], pFrameYUV->linesize[0]);
				SDL_UpdateYUVTexture(bmp, &rect,
					pFrameYUV->data[0], pFrameYUV->linesize[0],
					pFrameYUV->data[1], pFrameYUV->linesize[1],
					pFrameYUV->data[2], pFrameYUV->linesize[2]);
				SDL_RenderClear(renderer);
				SDL_RenderCopy(renderer, bmp, &rect, &rect);
				SDL_RenderPresent(renderer);
				SDL_Delay(30);

			}
		}
		av_free_packet(&packet);

		SDL_PollEvent(&event);
		switch (event.type)
		{
		case SDL_QUIT:
			SDL_Quit();
			av_free(buffer);
			av_frame_free(&pFrame);
			av_frame_free(&pFrameYUV);

			avcodec_close(pCodecCtx);
			avcodec_close(pCodecCtxOrg);

			avformat_close_input(&pFormatCtx);
			return 0;
		}
	}


	av_free(buffer);
	av_frame_free(&pFrame);
	av_frame_free(&pFrameYUV);

	avcodec_close(pCodecCtx);
	avcodec_close(pCodecCtxOrg);

	avformat_close_input(&pFormatCtx);

	getchar();
	return 0;
}

 

These last two files are the standalone audio and video playback sources. I have not had time to implement audio/video synchronization yet; once it is done I will share that too. For now, a sketch of the usual approach follows.
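For reference, the common approach (the one taken by the ffmpeg/SDL tutorial linked at the end) treats the audio clock as the master and stretches or shrinks each video frame's delay toward it. This is a sketch of the core arithmetic only, not working code; audio_clock() is a hypothetical function returning the seconds of audio actually played, and frame_pts/last_vpts/video_stream are assumed to exist:

double vpts  = frame_pts * av_q2d(video_stream->time_base); // this frame's clock
double delay = vpts - last_vpts;                            // nominal frame gap
double diff  = vpts - audio_clock();                        // how far video leads audio

if (diff <= -delay)                 // video is late: show the frame immediately
	delay = 0;
else if (diff >= delay)             // video is early: hold the frame longer
	delay = 2 * delay;

SDL_Delay((Uint32)(delay * 1000));  // crude; a refreshing timer works better
last_vpts = vpts;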

Summary:

The code above gets you the simplest possible player. FFmpeg and SDL are both excellent codebases, but they are genuinely huge, and so far I have only glimpsed a corner of them, so this is what I can share for now; there is plenty more studying ahead. If you have better ideas or structures, please share them so we can all improve together.

Demo download for this post (two versions: Eclipse and an Android Studio library):

GitHub:FFmpeg-SDL2PlayerSDK

 

Learning resources

FFmpeg: Lei Xiaohua (雷霄驊, leixiaohua1020)'s blog column

FFmpeg and SDL: An ffmpeg and SDL Tutorial
