Getting Started with WebRTC for Android

By Vivek Chanddru, Oct 16, 2016

Update: I wrote this post much earlier, and WebRTC has since changed its APIs. I have updated the post to reflect the new API usage.

For the past few weeks, I have been tasked with doing something with WebRTC in my Android app. Seems quite simple and straightforward, right? Who knew I would hit wall after wall for this simple yet not-so-simple task?

The main reason for this might be that there were no proper tutorials or guides, hell, not even documentation, for using WebRTC in a native Android application. Every time I searched for “WebRTC tutorial for Android”, I could not find anything that was both useful and complete for a native Android app. (Or maybe I should work on my Googling skills 😦 )

So here I am, set out to do a tutorial series on my own (with a little to a lot of help from Google, of course). This series is largely based on the codelab for WebRTC, which is a great place to get started with WebRTC for browsers. This series will port the same experience to native Android.

Part 1: Introduction to WebRTC (this article)
Part 2: Introduction to PeerConnection
Part 3: Peer-to-Peer Video Calling — Loopback
Part 4: Peer-to-Peer Video Calling with socket.io

So let us begin.

Prerequisites

  1. Working, compiled WebRTC native code (see here for more info on compilation; I will add a separate post on how to get the .so files)
  2. Android Studio

Update: WebRTC now provides a way to create an aar file which wraps the .so and .jar files. You can refer here for more info on how to generate the build. (Credits to Antonis Tsakiridis/Restcomm for the wiki)

Download the latest (11–8–2017) aar build from here. I was able to generate the aar build easily thanks to the above link from the Restcomm wiki.

Let's Get Started!

First, add the WebRTC dependency to your build.gradle file:

compile(name:'libwebrtc', ext:'aar')

You might also have to add the lines below, so that Gradle can find the aar file in your libs directory:

repositories {
    flatDir {
        dirs 'libs'
    }
}

Optionally, you can add the aar file as a module to your project. Sync your gradle file and voila! You now have the WebRTC library attached to your application.
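
One thing the snippets below quietly assume: capturing camera and microphone data requires the CAMERA and RECORD_AUDIO permissions in your AndroidManifest.xml, and on Android 6.0+ they must also be granted at runtime before you start the capturer. A minimal sketch using the support library (the request code 1 is an arbitrary value for this sketch):

//requires android.Manifest, android.content.pm.PackageManager,
//android.support.v4.app.ActivityCompat and android.support.v4.content.ContextCompat
if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED
        || ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
        != PackageManager.PERMISSION_GRANTED) {
    //Ask the user for camera and mic access before touching the capturer
    ActivityCompat.requestPermissions(this,
            new String[]{Manifest.permission.CAMERA, Manifest.permission.RECORD_AUDIO}, 1);
}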

What are we going to do?

Since this is a getting-started guide, let us not go deep into how PeerConnection works or what STUN/TURN/ICE are and other such mumbo-jumbo. We will get to it soon. One step at a time. So let us go and find out how to get video from the camera and show it on our screen (using the WebRTC APIs).

Before I say anything, let me show you the code.

package xyz.vivekc.webrtccodelab;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;

import org.webrtc.AudioSource;
import org.webrtc.AudioTrack;
import org.webrtc.Camera1Enumerator;
import org.webrtc.Camera2Enumerator;
import org.webrtc.CameraEnumerator;
import org.webrtc.EglBase;
import org.webrtc.MediaConstraints;
import org.webrtc.PeerConnectionFactory;
import org.webrtc.SurfaceViewRenderer;
import org.webrtc.VideoCapturer;
import org.webrtc.VideoRenderer;
import org.webrtc.VideoSource;
import org.webrtc.VideoTrack;

public class MainActivity extends AppCompatActivity {
    private static final String TAG = "MainActivity";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        //Initialize PeerConnectionFactory globals.
        //Params are context, initAudio,initVideo and videoCodecHwAcceleration
        PeerConnectionFactory.initializeAndroidGlobals(this, true, true, true);

        //Create a new PeerConnectionFactory instance.
        PeerConnectionFactory.Options options = new PeerConnectionFactory.Options();
        PeerConnectionFactory peerConnectionFactory = new PeerConnectionFactory(options);


        //Now create a VideoCapturer instance. Pass a CameraEventsHandler instead of null in the helper below if you want capture callbacks!
        VideoCapturer videoCapturerAndroid = createVideoCapturer();
        //Create MediaConstraints - Will be useful for specifying video and audio constraints. More on this later!
        MediaConstraints constraints = new MediaConstraints();

        //Create a VideoSource instance
        VideoSource videoSource = peerConnectionFactory.createVideoSource(videoCapturerAndroid);
        VideoTrack localVideoTrack = peerConnectionFactory.createVideoTrack("100", videoSource);

        //create an AudioSource instance
        AudioSource audioSource = peerConnectionFactory.createAudioSource(constraints);
        AudioTrack localAudioTrack = peerConnectionFactory.createAudioTrack("101", audioSource);

        //we will start capturing the video from the camera
        //params are width,height and fps
        videoCapturerAndroid.startCapture(1000, 1000, 30);

        //create surface renderer, init it and add the renderer to the track
        SurfaceViewRenderer videoView = (SurfaceViewRenderer) findViewById(R.id.surface_renderer);
        videoView.setMirror(true);

        EglBase rootEglBase = EglBase.create();
        videoView.init(rootEglBase.getEglBaseContext(), null);

        localVideoTrack.addRenderer(new VideoRenderer(videoView));
    }


    private VideoCapturer createVideoCapturer() {
        VideoCapturer videoCapturer;
        videoCapturer = createCameraCapturer(new Camera1Enumerator(false));
        return videoCapturer;
    }

    private VideoCapturer createCameraCapturer(CameraEnumerator enumerator) {
        final String[] deviceNames = enumerator.getDeviceNames();

        // Trying to find a front facing camera!
        for (String deviceName : deviceNames) {
            if (enumerator.isFrontFacing(deviceName)) {
                VideoCapturer videoCapturer = enumerator.createCapturer(deviceName, null);

                if (videoCapturer != null) {
                    return videoCapturer;
                }
            }
        }

        // We were not able to find a front cam. Look for other cameras
        for (String deviceName : deviceNames) {
            if (!enumerator.isFrontFacing(deviceName)) {
                VideoCapturer videoCapturer = enumerator.createCapturer(deviceName, null);
                if (videoCapturer != null) {
                    return videoCapturer;
                }
            }
        }

        return null;
    }
}

Understood anything? If yes, great. You can go ahead and do what you were doing before you stumbled upon this article. If not, read on below.

The steps to display the video stream from the camera on a view are:

  1. Create and initialize PeerConnectionFactory
  2. Create a VideoCapturer instance which uses the camera of the device
  3. Create a VideoSource from the Capturer
  4. Create a VideoTrack from the source
  5. Create a video renderer using a SurfaceViewRenderer view and add it to the VideoTrack instance

Create and initialize PeerConnectionFactory

First and foremost, you have to create a PeerConnectionFactory to use WebRTC in Android. It is the foundation upon which everything else is built.

//Initialize PeerConnectionFactory globals.
//Params are context, initAudio,initVideo and videoCodecHwAcceleration
PeerConnectionFactory.initializeAndroidGlobals(this, true, true, true);

//Create a new PeerConnectionFactory instance.
PeerConnectionFactory.Options options = new PeerConnectionFactory.Options();
PeerConnectionFactory peerConnectionFactory = new PeerConnectionFactory(options);

Here, we tell the WebRTC library to initialize with audio, video, and video hardware acceleration enabled. When creating a new PeerConnectionFactory, we can also pass in an additional Options instance, which allows us to set certain flags such as disableEncryption, disableNetworkMonitor and networkIgnoreMask, as sketched below.
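
If you ever need those flags, they are plain public fields on the Options instance. A quick sketch with the default values spelled out (purely illustrative; the defaults are fine for this tutorial):

PeerConnectionFactory.Options options = new PeerConnectionFactory.Options();
options.disableEncryption = false;     //keep DTLS-SRTP encryption enabled
options.disableNetworkMonitor = false; //let WebRTC watch for network changes
options.networkIgnoreMask = 0;         //0 = do not ignore any network adapter types
PeerConnectionFactory peerConnectionFactory = new PeerConnectionFactory(options);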

Create a VideoCapturer

Now that we have a PeerConnectionFactory, we can go ahead and create a capturer which takes the image/video from the device's camera. The method below tries to find a front-facing camera first and falls back to any other available camera.

private VideoCapturer createVideoCapturer() {
    VideoCapturer videoCapturer;
    videoCapturer = createCameraCapturer(new Camera1Enumerator(false));
    return videoCapturer;
}

private VideoCapturer createCameraCapturer(CameraEnumerator enumerator) {
    final String[] deviceNames = enumerator.getDeviceNames();

    // Trying to find a front facing camera!
    for (String deviceName : deviceNames) {
        if (enumerator.isFrontFacing(deviceName)) {
            VideoCapturer videoCapturer = enumerator.createCapturer(deviceName, null);

            if (videoCapturer != null) {
                return videoCapturer;
            }
        }
    }

    // We were not able to find a front cam. Look for other cameras
    for (String deviceName : deviceNames) {
        if (!enumerator.isFrontFacing(deviceName)) {
            VideoCapturer videoCapturer = enumerator.createCapturer(deviceName, null);
            if (videoCapturer != null) {
                return videoCapturer;
            }
        }
    }

    return null;
}
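
Note that the code above only ever uses the Camera1 API. If you would rather use the Camera2 API on devices that support it (we already import Camera2Enumerator), the usual pattern is a sketch like this:

private VideoCapturer createVideoCapturer() {
    //Use the Camera2 API when the device supports it, else fall back to Camera1
    CameraEnumerator enumerator = Camera2Enumerator.isSupported(this)
            ? new Camera2Enumerator(this)
            : new Camera1Enumerator(false);
    return createCameraCapturer(enumerator);
}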

Create VideoSource and VideoTrack from the Capturer

Now that we have the VideoCapturer, we can use this to create a VideoSource.

//Create a VideoSource instance
VideoSource videoSource = peerConnectionFactory.createVideoSource(videoCapturerAndroid);
VideoTrack localVideoTrack = peerConnectionFactory.createVideoTrack("100", videoSource);

Once the VideoSource is created from the PeerConnectionFactory instance, we use it to create a VideoTrack. Each VideoTrack needs a unique identifier (here it is "100", but it can be any String).
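
A side note while we are here: a track can be muted and unmuted at any time with setEnabled(), without tearing down its source:

//temporarily pause the local video feed (e.g., when the app is backgrounded)
localVideoTrack.setEnabled(false);
//...and resume it later
localVideoTrack.setEnabled(true);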

Using SurfaceViewRenderer

We now have a VideoTrack which gives the stream of data from the device’s camera. If somehow we could display it on the screen, we could call it a day. WebRTC provides SurfaceViewRenderer for this purpose. It can be used to create a Renderer which is attached to the VideoTrack.

Before using the renderer, we have to start the VideoCapturer. We can do it by calling,

videoCapturerAndroid.startCapture(width, height, fps)

Once that is done, we can place our SurfaceViewRenderer in our XML layout or add it programmatically. Once our VideoCapturer instance is up and capturing our video, we can add the renderer to the VideoTrack that we created using,

//create surface renderer, init it and add the renderer to the track
SurfaceViewRenderer videoView = (SurfaceViewRenderer) findViewById(R.id.surface_renderer);
//create an EglBase instance
EglBase rootEglBase = EglBase.create();
//init the SurfaceViewRenderer using the eglContext
videoView.init(rootEglBase.getEglBaseContext(), null);
//a small method to provide a mirror effect to the SurfaceViewRenderer
videoView.setMirror(true);
//Add the renderer to the video track
localVideoTrack.addRenderer(new VideoRenderer(videoView));
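
The findViewById call above assumes the SurfaceViewRenderer is declared in activity_main.xml. If you would rather add it programmatically, as mentioned earlier, a sketch of that variant (requires android.view.ViewGroup):

//create the renderer in code instead of inflating it from the XML layout
SurfaceViewRenderer videoView = new SurfaceViewRenderer(this);
addContentView(videoView, new ViewGroup.LayoutParams(
        ViewGroup.LayoutParams.MATCH_PARENT,
        ViewGroup.LayoutParams.MATCH_PARENT));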

Expecting more? That's all! Just run the code, and if all works out well, you should see your happy face on your screen!
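
One parting tip: release the camera and the renderer when your Activity goes away, or the camera stays locked for other apps. A sketch, assuming you promote videoCapturerAndroid and videoView from locals to fields:

@Override
protected void onDestroy() {
    try {
        //stopCapture() blocks until the camera device is released
        videoCapturerAndroid.stopCapture();
    } catch (InterruptedException e) {
        Log.e(TAG, "Could not stop the capturer", e);
    }
    //free the renderer's EGL resources
    videoView.release();
    super.onDestroy();
}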
