Pre-process the video

Last updated: 2023-11-14 14:43

Introduction

Custom video pre-processing refers to processing the captured video with the AI Effects SDK for features such as face beautification and stickers. This can be necessary when the built-in capabilities of the Video Call SDK do not meet your development requirements.

(Figure: custom video pre-processing flow)

Compared with the custom video capture feature, custom video pre-processing does not require you to manage the device input sources. You only need to manipulate the raw data that the ZegoExpress-Video SDK throws out and then send it back to the ZegoExpress-Video SDK.

For more advanced features such as layer blending, we recommend that you refer to the Custom video capture documentation instead.

Prerequisites

Before you begin, make sure you complete the following:

  • Create a project in ZEGOCLOUD Admin Console and get the AppID and AppSign of your project.

  • Refer to the Quick Start doc to complete the SDK integration and basic function implementation (a minimal engine initialization sketch is shown after this list for reference).
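
If you have completed the Quick Start, a ZegoExpressEngine instance is already available in your project. For reference only, the following is a minimal initialization sketch: the appID and appSign values are hypothetical placeholders for your project's credentials from the Admin Console, and the exact ZegoScenario value may differ depending on your SDK version.

ZegoEngineProfile profile = new ZegoEngineProfile();
profile.appID = 1234567890L;             // Hypothetical placeholder: your AppID from the ZEGOCLOUD Admin Console.
profile.appSign = "yourAppSign";         // Hypothetical placeholder: your AppSign from the ZEGOCLOUD Admin Console.
profile.scenario = ZegoScenario.DEFAULT; // Pick the scenario that matches your use case; this enum value may vary by SDK version.
profile.application = getApplication();  // Assumes this code runs inside an Android Application or Activity.

// Create the engine instance that is used in the snippets below.
ZegoExpressEngine express = ZegoExpressEngine.createEngine(profile, null);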

Implementation process

The following diagram shows the API call sequence of custom video pre-processing (taking the ZegoVideoBufferType.GL_TEXTURE_2D type as an example):

Enable the custom video pre-processing feature

  1. Create a ZegoCustomVideoProcessConfig object and set its bufferType property to specify the video frame data type that you will provide to the SDK.

The SDK supports multiple types of video data, so you need to specify which buffer type you are using.

The SDK currently supports the following data types; setting other enumeration values will not take effect.

  • SurfaceTexture: When the value of the bufferType is ZegoVideoBufferType.SURFACE_TEXTURE.
  • GLTexture2D: When the value of the bufferType is ZegoVideoBufferType.GL_TEXTURE_2D.
  2. To enable custom video pre-processing, call the enableCustomVideoProcessing method before starting the local video preview and publishing the stream.
ZegoCustomVideoProcessConfig config = new ZegoCustomVideoProcessConfig();
// Select the [GL_TEXTURE_2D] type of video frame data.
config.bufferType = ZegoVideoBufferType.GL_TEXTURE_2D;

// Enable the custom video pre-processing feature.
express.enableCustomVideoProcessing(true, config, ZegoPublishChannel.MAIN);

Get and process the video raw data

After the SDK captures the raw video data, it notifies you through the onCapturedUnprocessedTextureData callback, and you can then process the video using the Effects SDK.

The following sample code shows the key method calling steps:

// Get the raw video data through the related event callbacks.
// Listen for and handle the related event callbacks.
express.setCustomVideoProcessHandler(new IZegoCustomVideoProcessHandler() {
    ...

    // Receive the texture from ZegoExpressEngine.
    @Override
    public void onCapturedUnprocessedTextureData(int textureID, int width, int height, long referenceTimeMillisecond, ZegoPublishChannel channel) {
        // Describe the frame format for the Effects SDK.
        ZegoEffectsVideoFrameParam param = new ZegoEffectsVideoFrameParam();
        param.format = ZegoEffectsVideoFrameFormat.BGRA32;
        param.width = width;
        param.height = height;

        // Process the texture with ZegoEffects.
        int processedTextureID = effects.processTexture(textureID, param);

        // Send the processed texture back to the SDK.
        express.sendCustomVideoProcessedTextureData(processedTextureID, width, height, referenceTimeMillisecond);
    }
});
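
The effects object in the sample above is a ZegoEffects instance that you create and initialize yourself by following the AI Effects SDK documentation; it is not created by the Video Call SDK. When custom pre-processing is no longer needed, you can switch the feature off with the same method used to enable it. Below is a minimal sketch, assuming the express engine and the config object from the earlier snippet.

// Disable custom video pre-processing, for example before stopping publishing or destroying the engine.
// This reuses the config object that was created when the feature was enabled.
express.enableCustomVideoProcessing(false, config, ZegoPublishChannel.MAIN);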