Add video filters with the AI Effects

Last updated: 2023-05-17 19:00

Introduction

The video call service provided by ZEGOCLOUD enables you to build audio and video applications through its flexible and easy-to-use APIs. Meanwhile, AI Effects, another ZEGOCLOUD add-on, uses AI algorithms to implement a series of beautification features such as face beautification, face shape retouching, and more.

By using the two SDKs together, you can combine these services to create a real-time application with beautification features, which can be widely used in entertainment live streaming, live game streaming, video conferencing, and other live streaming scenarios.

Basic concepts

  • The ZegoExpress-Video SDK (hereafter called the Express SDK): the video call SDK provided by ZEGOCLOUD. It enables you to implement real-time audio and video features in live streaming, live co-hosting streaming, and other scenarios.

  • The ZegoEffects SDK (hereafter called the Effects SDK): the AI effects SDK provided by ZEGOCLOUD. It provides AI-based image rendering and algorithm capabilities that enable you to implement face beautification, face shape retouching, background segmentation, face detection, and other features.

Prerequisites

  • The ZEGO Express SDK has been integrated into your project to implement basic real-time audio and video functions. For details, see Quick start.
  • A project has been created in the ZEGOCLOUD Console, and a valid AppID and AppSign have been obtained. For details, see Console - Project Information.

Implementation process

The overall process of combining the two SDKs is as follows:

  1. Initialize the Express SDK and the Effects SDK (in no particular order).
  2. Obtain the raw video data. For details, see Custom video capture or Custom video pre-processing.
  3. Process the raw video data for beautification effects using the Effects SDK, send the processed video data back, and publish the video streams using the Express SDK.
  4. To adjust the AI effects during the stream publishing and playing operation, you can use the related functions of the Effects SDK to make changes in real time.

Initialize the Effects SDK

Import the resources and models of the Effects SDK

Before using the AI features of the Effects SDK, you need to import the resources or models required for those features.

// Specify the absolute path of the face recognition model, which is required for features such as face detection, eyes enlarging, and face slimming.
NSString *faceDetectionModelPath = [[NSBundle mainBundle] pathForResource:@"FaceDetectionModel" ofType:@"bundle"];
// Specify the absolute path of the portrait segmentation model, which is required for the portrait segmentation feature.
NSString *segmentationModelPath = [[NSBundle mainBundle] pathForResource:@"SegmentationModel" ofType:@"bundle"];

// Specify the absolute path of the stickers resource.
NSString *pendantBundlePath = [[NSBundle mainBundle] pathForResource:@"PendantResources" ofType:@"bundle"];
// Specify the absolute path of the skin tone enhancement resource.
NSString *whitenBundlePath = [[NSBundle mainBundle] pathForResource:@"FaceWhiteningResources" ofType:@"bundle"];

// Set the path list for resources or models, which must be called before calling the `create` method to create a ZegoEffects object.
[ZegoEffects setResources:@[faceDetectionModelPath, segmentationModelPath, pendantBundlePath, whitenBundlePath]];

For all resources and models that the Effects SDK supports, see Import resources and models.
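
Note that pathForResource:ofType: returns nil when a bundle is missing from the app bundle, and a nil element in an array literal crashes at runtime. A minimal defensive sketch, using the same placeholder bundle names as above:

// Verify that every required bundle is present before building the path list;
// a missing bundle would otherwise produce a nil element and crash the array literal.
for (NSString *name in @[@"FaceDetectionModel", @"SegmentationModel",
                         @"PendantResources", @"FaceWhiteningResources"]) {
    if ([[NSBundle mainBundle] pathForResource:name ofType:@"bundle"] == nil) {
        NSLog(@"[Effects] Missing resource bundle: %@.bundle", name);
    }
}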

Create an Effects SDK object

To create an Effects SDK object, pass in the authentication file you obtained in the Prerequisites step above.

// Pass in the authentication file you obtained.
ZegoEffects *effects = [ZegoEffects create:@"ABCDEFG"];
// Save the Effects SDK instance.
self.effects = effects;
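
The string passed to create: above is a placeholder for your actual authentication information. A defensive sketch, assuming create: returns nil when authentication fails (license is a hypothetical variable holding your authentication content):

// Assumed failure mode: an invalid or expired license yields no instance.
ZegoEffects *effects = [ZegoEffects create:license];
if (effects == nil) {
    NSLog(@"[Effects] Failed to create the ZegoEffects object; check your authentication info.");
    return;
}
self.effects = effects;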

Initialize the Effects SDK object

To initialize the Effects SDK object, call the initEnv method, and pass in the width and height of the incoming video data to be processed.

The following sample code processes 1280 × 720 video images:

// Initialize the Effects SDK object, and pass in the width and height of the incoming video data to be processed.
[self.effects initEnv:CGSizeMake(1280, 720)];

Initialize the Express SDK

To initialize the Express SDK, call the createEngineWithProfile method.

ZegoEngineProfile *profile = [ZegoEngineProfile new];
// The AppID value you get from the ZEGO Admin console.
profile.appID = appID;  
// Use the general scenario.
profile.scenario = ZegoScenarioGeneral; 
// Create a ZegoExpressEngine instance and set eventHandler to [self]. If eventHandler is set to [nil], no callback will be received. You can set up the event handler later by calling the [-setEventHandler:] method.
[ZegoExpressEngine createEngineWithProfile:profile eventHandler:self];
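
Step 3 of the implementation process publishes the processed stream through the Express SDK. For context, a minimal sketch of logging in to a room and starting to publish after the engine is created; the room ID, user ID, and stream ID below are placeholder values:

// Log in to a room; roomID and userID are placeholders.
ZegoUser *user = [ZegoUser userWithUserID:@"user1"];
[[ZegoExpressEngine sharedEngine] loginRoom:@"room1" user:user];

// Start the local preview (a nil canvas skips local rendering) and publish the stream.
[[ZegoExpressEngine sharedEngine] startPreview:nil];
[[ZegoExpressEngine sharedEngine] startPublishingStream:@"stream1"];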

Get the raw video data

The Express SDK provides two methods to get the raw video data:

  • Custom video pre-processing: The Express SDK captures the raw video data internally and sends it out through a callback.
  • Custom video capture: You capture the raw video data yourself and then provide it to the Express SDK.

The differences between the two methods are as follows; choose based on your actual situation:

Method: Custom video pre-processing
Description: The ZEGO Express SDK collects the raw video data.
Advantages: Used together with the Effects SDK, you do not need to manage the device input sources; simply manipulate the raw data thrown out by the Express SDK and pass it back.

Method: Custom video capture
Description: You capture the raw video data yourself.
Advantages: When integrating capture sources from multiple vendors, you can implement your services flexibly and optimize performance.
  • Method 1: Custom video pre-processing

To obtain the raw video data using this method, do the following:

a. Select the CVPixelBuffer video frame data type.

b. Call the enableCustomVideoProcessing method to enable the custom video pre-processing.

c. The SDK sends out the captured video raw data through the callback onCapturedUnprocessedCVPixelBuffer.

ZegoCustomVideoProcessConfig *processConfig = [[ZegoCustomVideoProcessConfig alloc] init];
// Select the [CVPixelBuffer] as video frame data type.
processConfig.bufferType = ZegoVideoBufferTypeCVPixelBuffer;

// Enable the custom video pre-processing.
[[ZegoExpressEngine sharedEngine] enableCustomVideoProcessing:YES config:processConfig channel:ZegoPublishChannelMain];

// Set [self] as the event handler object of the custom video pre-processing callback.
[[ZegoExpressEngine sharedEngine] setCustomVideoProcessHandler:self];

For details, see Custom video pre-processing. (A sketch of the handler declaration follows Method 2 below.)

  • Method 2: Custom video capture

For details, see Custom video capture.
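
For Method 1, the object passed to setCustomVideoProcessHandler: (here [self]) must declare conformance to the ZegoCustomVideoProcessHandler protocol so that it can receive the raw-frame callback shown in the next section. A sketch of the declaration (ViewController is a placeholder class name):

// The class receiving the raw-frame callbacks conforms to ZegoCustomVideoProcessHandler.
@interface ViewController () <ZegoCustomVideoProcessHandler>
// Hold the Effects SDK instance created earlier.
@property (nonatomic, strong) ZegoEffects *effects;
@end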

Process the video data with the Effects SDK

In the callback that receives the raw video data, use the Effects SDK to implement the AI features.

After the AI features are applied, the Express SDK encodes the processed data and sends it to the cloud server. Remote users can then play the processed video streams.

  • If you obtain the video data using the custom video pre-processing method, process the captured video data for AI features as follows:
  1. Process the video data in the onCapturedUnprocessedCVPixelBuffer callback.
  2. After processing, send the video data back to the Express SDK.
// Take using the custom video pre-processing method as an example.
// Obtain the raw video data through a callback.
- (void)onCapturedUnprocessedCVPixelBuffer:(CVPixelBufferRef)buffer timestamp:(CMTime)timestamp channel:(ZegoPublishChannel)channel {
    ...
    // Custom video pre-processing: use the Effects SDK here. processImageBuffer: processes the buffer in place.
    [self.effects processImageBuffer:buffer];

    // Send the processed buffer back to the Express SDK.
    [[ZegoExpressEngine sharedEngine] sendCustomVideoProcessedCVPixelBuffer:buffer timestamp:timestamp channel:channel];
    ...
}
  • If you obtain the video data using the custom video capture method, process the captured video data for AI features as follows:
  1. Process the video data in the corresponding callback (for details, see Custom video capture).
  2. Send the processed data back to the Express SDK, as shown in the sketch below.
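
For the custom capture route, the hand-off is similar; a minimal sketch, assuming your own camera pipeline delivers frames as CVPixelBuffers (myCapturedBuffer and myTimestamp are placeholder variables):

// myCapturedBuffer / myTimestamp come from your own capture pipeline (placeholders).
// Process the self-captured frame in place with the Effects SDK.
[self.effects processImageBuffer:myCapturedBuffer];
// Hand the processed frame to the Express SDK for encoding and publishing.
[[ZegoExpressEngine sharedEngine] sendCustomVideoCapturePixelBuffer:myCapturedBuffer timestamp:myTimestamp];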

Adjust the beautification features using the Effects SDK

To adjust the AI effects during the stream publishing and playing operation, use the Effects SDK to make changes in real time.

// Enable the skin tone enhancement feature.
[self.effects enableWhiten:YES];

// Set the whitening intensity. The value range is [0, 100], and the default value is 50.
ZegoEffectsWhitenParam *param = [[ZegoEffectsWhitenParam alloc] init];
param.intensity = 100;
[self.effects setWhitenParam:param];

For more AI features, see Face beautification, Face shape retouch, Background segmentation, Face detection, Stickers, and Filters.
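
Other beautification switches follow the same enable-then-set-parameters pattern as the skin tone enhancement example above. A sketch using skin smoothing, assuming your Effects SDK version provides enableSmooth: and setSmoothParam: with the same [0, 100] intensity range:

// Same pattern as whitening; API availability may vary by SDK version.
// Enable skin smoothing, then raise the intensity above the default.
[self.effects enableSmooth:YES];
ZegoEffectsSmoothParam *smoothParam = [[ZegoEffectsSmoothParam alloc] init];
smoothParam.intensity = 80;
[self.effects setSmoothParam:smoothParam];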
