
Custom Video Capture

Update Time: 2021-04-25 20:24

1 Introduction

When the ZEGO SDK's default video capture module cannot meet your application's requirements, the SDK allows you to customize the video capture process. With custom video capture enabled, you manage the video capture on your own and send the captured video data to the SDK for subsequent video encoding and stream publishing. You can still call the SDK's API to render the video for local preview, so you don't have to implement the rendering yourself.

Listed below are some scenarios where enabling custom video capture is recommended:

  • Your application needs to use a third-party beauty SDK. In such cases, you can perform video capture and preprocessing with the beauty SDK yourself, and then pass the preprocessed video data to the ZEGO SDK for subsequent video encoding and stream publishing.
  • Your application needs to perform another camera-related task during live streaming (for example, recording a short video clip), which would conflict with the ZEGO SDK's default video capture module.
  • Your application needs to live stream with video data captured from a non-camera video source, such as a video file, a screen to be shared, or live video game content.

2 Download the Sample Codes

To download and run the sample code, refer to Sample Codes.

Refer to the source files in the directory "/ZegoExpressExample/CustomVideoCapture" for the code examples of this feature.

3 Prerequisites

Before enabling this feature, please make sure:

  • ZEGO Express SDK has been integrated into the project to implement basic real-time audio and video functions. For details, refer to Quick Starts.
  • A project has been created in the ZEGO Admin Console, and a valid AppID and AppSign have been obtained. For details, refer to Console - Project Management.

4 Implementation Steps

The process of custom video capture is as follows:

  1. Create a "ZegoExpressEngine" instance (see the sketch after this list).
  2. Enable custom video capture.
  3. Set up the event handler for custom video capture callbacks.
  4. Log in to the room and start publishing the stream, and the callback "onStart" will be triggered.
  5. On receiving the callback "onStart", start sending video frame data to the SDK.
  6. When the stream publishing stops, the callback "onStop" will be triggered. On receiving this callback, stop the video capture.
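
As a reference for step 1, here is a minimal sketch of creating the "ZegoExpressEngine" instance. The appID, appSign, and other arguments are placeholders, and the exact createEngine overload may differ depending on your SDK version:

// Minimal sketch of step 1: create the engine instance before enabling custom video capture.
// appID and appSign are placeholder credentials obtained from the ZEGO Admin Console.
ZegoExpressEngine engine = ZegoExpressEngine.createEngine(
        appID,                 // AppID from the ZEGO Admin Console (placeholder)
        appSign,               // AppSign from the ZEGO Admin Console (placeholder)
        true,                  // Whether to use the test environment
        ZegoScenario.GENERAL,  // Room scenario
        getApplication(),      // Android Application context
        null);                 // Optional IZegoEventHandler for common event callbacks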

Refer to the API call sequence diagram below to implement custom video capture in your project:

4.1 Enable Custom Video Capture

First, create a ZegoCustomVideoCaptureConfig object and configure the bufferType attribute to specify the data type to be used to send the captured video frame data to the SDK. Then, call enableCustomVideoCapture to enable custom video capture.

ZegoCustomVideoCaptureConfig videoCaptureConfig = new ZegoCustomVideoCaptureConfig();
// Set the data type of the captured video frame to RAW_DATA.
videoCaptureConfig.bufferType = ZegoVideoBufferType.RAW_DATA;

engine.enableCustomVideoCapture(true, videoCaptureConfig, ZegoPublishChannel.MAIN);

4.2 Set up the Custom Video Capture Callback Handler

Call setCustomVideoCaptureHandler to set up an event handler (an object of the IZegoCustomVideoCaptureHandler class) to listen for and handle the callbacks related to custom video capture: onStart and onStop.

// Set up an event handler object to listen for the custom video capture callbacks
engine.setCustomVideoCaptureHandler(new IZegoCustomVideoCaptureHandler() {
    @Override
    public void onStart(ZegoPublishChannel channel) {
        // On receiving the onStart callback, you can execute the tasks to start up your customized video capture process (e.g., turning on the camera) and start sending video frame data to the SDK.
        ...
    }
    @Override
    public void onStop(ZegoPublishChannel channel) {
        // On receiving the onStop callback, you can execute the tasks to stop your customized video capture process (e.g., turning off the camera) and stop sending video frame data to the SDK.
        ...
    }
});

4.3 Send the Captured Video Frame Data to the SDK

When you call startPreview to start the local preview or call startPublishingStream to start the stream publishing, the callback onStart will be triggered. On receiving this callback, you can start the video capture process and then call sendCustomVideoCaptureRawData or sendCustomVideoCaptureTextureData to send the captured video frame data to the SDK.
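
For reference, triggering the onStart callback typically looks like the following sketch, where the room ID, user ID, and stream ID are placeholder values:

// Log in to a room and start publishing a stream; this triggers the onStart callback
// of the custom video capture handler. The IDs below are placeholders.
engine.loginRoom("room1", new ZegoUser("user1"));
engine.startPublishingStream("stream1");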

See below an example of sending the captured video frame data in "RAW_DATA" format to the SDK.

// Send the captured video frame to the ZEGO SDK in raw data format.
// "data" is the captured frame data (for example, an NV21 buffer from the camera callback),
// "param" is a ZegoVideoFrameParam describing the frame's pixel format and size,
// and "now" is the capture timestamp in milliseconds.
if (byteBuffer == null) {
    byteBuffer = ByteBuffer.allocateDirect(data.length);
}
byteBuffer.put(data);
byteBuffer.flip();

engine.sendCustomVideoCaptureRawData(byteBuffer, data.length, param, now);

When both the stream publishing and local preview are stopped, the callback onStop will be triggered. On receiving this callback, you can stop the video capture process, for example, turn off the camera.

5 API Reference

  • enableCustomVideoCapture: Enables or disables custom video capture.
  • setCustomVideoCaptureHandler: Sets up the event handler for custom video capture callbacks.
  • onStart: The callback triggered when the SDK is ready to receive captured video data.
  • onStop: The callback triggered when the SDK stops receiving captured video data.
  • sendCustomVideoCaptureRawData: Sends the captured video frame to the SDK in raw data format.
  • sendCustomVideoCaptureTextureData: Sends the captured video frame to the SDK as texture data.

6 FAQ

  1. How do I use the "OpenGL Texture 2D" data type to transfer the captured video data?

    In step 4.1, set the bufferType attribute of the ZegoCustomVideoCaptureConfig object to GL_TEXTURE_2D, and then call the sendCustomVideoCaptureTextureData method to send the captured video data.
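
    A minimal sketch of this approach is shown below, assuming textureID, width, height, and timestampMs come from your own OpenGL capture pipeline:

    // Configure custom capture to deliver OpenGL ES 2D textures instead of raw data.
    ZegoCustomVideoCaptureConfig videoCaptureConfig = new ZegoCustomVideoCaptureConfig();
    videoCaptureConfig.bufferType = ZegoVideoBufferType.GL_TEXTURE_2D;
    engine.enableCustomVideoCapture(true, videoCaptureConfig, ZegoPublishChannel.MAIN);

    // After onStart, send each captured frame as a texture.
    // textureID, width, height, and timestampMs are placeholders from your capture pipeline.
    engine.sendCustomVideoCaptureTextureData(textureID, width, height, timestampMs);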

  2. With custom video capture enabled, the local preview works fine, but remote viewers see distorted video images. How do I solve this problem?

    This happens because the aspect ratio of the captured video is different from the aspect ratio of the SDK's default encoding resolution. For instance, if the aspect ratio of the captured video is 4:3 but the aspect ratio of the SDK's default encoding resolution is 16:9, you can solve the problem using any one of the following solutions:

    • Option 1: Change the video capture aspect ratio to 16:9.

    • Option 2: Call setVideoConfig to set the SDK's video encoding resolution to one with a 4:3 aspect ratio.

    • Option 3: Call setCustomVideoCaptureFillMode to set the video fill mode to "ASPECT_FIT" (the video will have black padding areas) or "ASPECT_FILL" (part of the video image will be cropped out); see the sketch below.
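
    A minimal sketch of Option 3, assuming the fill mode is specified with the ZegoViewMode enum:

    // Let the SDK scale the captured video to the encoding resolution.
    // ASPECT_FIT keeps the whole image and adds black padding;
    // ASPECT_FILL fills the frame and crops part of the image.
    engine.setCustomVideoCaptureFillMode(ZegoViewMode.ASPECT_FIT, ZegoPublishChannel.MAIN);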

  3. After custom video capture is enabled, the video playback frame rate is not the same as the video capture frame rate. How do I solve this problem?

    Call setVideoConfig to set the SDK's encoding frame rate to be the same as the video capture frame rate (i.e., the frequency at which you call sendCustomVideoCaptureRawData or sendCustomVideoCaptureTextureData), as shown in the sketch below.
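
    A minimal sketch, assuming the capture source produces 15 frames per second and that ZegoVideoConfig exposes a public fps field:

    // Match the SDK's encoding frame rate to the custom capture frame rate (15 fps here).
    ZegoVideoConfig videoConfig = new ZegoVideoConfig();
    videoConfig.fps = 15;
    engine.setVideoConfig(videoConfig);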

  4. Does the SDK process the received video frame data synchronously or asynchronously?

    When the SDK receives the video frame data, it first copies the data synchronously and then performs encoding and other operations asynchronously. Therefore, the captured video frame data can be released as soon as it has been passed to the SDK.