
Customize how the video captures

Last updated: 2023-11-14 14:43

Introduction

When the ZEGOCLOUD SDK's default video capture module cannot meet your application's requirements, the SDK allows you to customize the video capture process. With custom video capture enabled, you manage the video capture on your own and send the captured video data to the SDK for subsequent video encoding and stream publishing. You can still call the SDK's API to render the video for local preview, which means you don't have to implement the rendering yourself.

Listed below are some scenarios where enabling custom video capture is recommended:

  • Your application needs to use a third-party beauty SDK. In such cases, you can perform video capture and video preprocessing with the beauty SDK, and then pass the preprocessed video data to the ZEGOCLOUD SDK for subsequent video encoding and stream publishing.
  • Your application needs to perform another task that also uses the camera during live streaming, which would conflict with the ZEGOCLOUD SDK's default video capture module. For example, it needs to record a short video clip during live streaming.
  • Your application needs to live stream with video data captured from a non-camera video source, such as a video file, a screen to be shared, or live video game content.

Prerequisites

Before enabling this feature, please make sure:

  • The ZEGO Express SDK has been integrated into the project to implement basic real-time audio and video functions. For details, please refer to Quick start.
  • A project has been created in the ZEGOCLOUD Console, and a valid AppID and AppSign have been obtained. For details, please refer to Console - Project Information.

Implementation process

The process of custom video capture is as follows:

  1. Create a ZegoExpressEngine instance.
  2. Enable custom video capture.
  3. Set up the event handler for custom video capture callbacks.
  4. Log in to the room and start publishing the stream, and the callback onStart will be triggered.
  5. On receiving the callback onStart, start sending video frame data to the SDK.
  6. When the stream publishing stops, the callback onStop will be triggered. On receiving this callback, stop the video capture.

Refer to the API call sequence diagram below to implement custom video capture in your project:

To use custom video capture, the enableCamera method must be kept at YES (the default setting); otherwise, there will be no video data when you publish streams.
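For reference, here is a minimal sketch of step 1, creating the engine. The appID and appSign values below are placeholders; use the ones from your ZEGOCLOUD Console project.

// Minimal sketch of step 1: create the engine before enabling custom video capture.
// The appID and appSign values are placeholders from your Console project.
ZegoEngineProfile *profile = [[ZegoEngineProfile alloc] init];
profile.appID = 1234567890;           // Placeholder AppID
profile.appSign = @"YOUR_APP_SIGN";   // Placeholder AppSign
profile.scenario = ZegoScenarioDefault;
[ZegoExpressEngine createEngineWithProfile:profile eventHandler:self];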

Enable custom video capture

First, create a ZegoCustomVideoCaptureConfig object and configure the bufferType attribute to specify the data type to be used to send the captured video frame data to the SDK. Then, call enableCustomVideoCapture to enable custom video capture.

Currently, the SDK supports two video buffer types for iOS: ZegoVideoBufferTypeCVPixelBuffer for the CVPixelBuffer data type, and ZegoVideoBufferTypeGLTexture2D for the GLTexture2D data type.

ZegoCustomVideoCaptureConfig *captureConfig = [[ZegoCustomVideoCaptureConfig alloc] init];
// Set the data type of the captured video frame to CVPixelBuffer.
captureConfig.bufferType = ZegoVideoBufferTypeCVPixelBuffer;

[[ZegoExpressEngine sharedEngine] enableCustomVideoCapture:YES config:captureConfig channel:ZegoPublishChannelMain];

Set up the custom video capture callback

Set the callback handler object

Set your ViewController as the callback object for the custom video capture callbacks by making it conform to the ZegoCustomVideoCaptureHandler protocol.

@interface ViewController () <ZegoEventHandler, ZegoCustomVideoCaptureHandler>

    ......

@end

Call setCustomVideoCaptureHandler to set up an event handler to listen for and handle the callbacks related to custom video capture.

// Set self (the ViewController) as the callback handler object
[[ZegoExpressEngine sharedEngine] setCustomVideoCaptureHandler:self];

Implement the callback handler methods

Implement the callback handler methods for the callbacks onStart and onStop.

When customizing the capture of multiple streams, you will need to specify the stream-publishing channel through the onStart and onStop callbacks (see the channel-aware sketch at the end of this section). Otherwise, only the main channel's event callbacks are notified by default.

// Note: This callback is not called in the main thread. Please switch to the main thread if you need to operate the UI objects. 
- (void)onStart {

    // On receiving the onStart callback, you can execute the tasks to start up your customized video capture process and start sending video frame data to the SDK.

    // Here is an example of turning on the video capture device.
    [self.captureDevice startCapture];
}

// Note: This callback is not called in the main thread. Please switch to the main thread if you need to operate the UI objects. 
- (void)onStop {

    // On receiving the onStop callback, you can execute the tasks to stop your customized video capture process and stop sending video frame data to the SDK.

    // Here is an example of turning off the video capture device.
    [self.captureDevice stopCapture];
}
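If you capture for multiple publish channels, the handler can branch on the channel carried by the callback. The signature below is an assumption to verify against the API reference for your SDK version.

// Sketch of a channel-aware onStart handler (signature assumed; verify against your SDK version).
- (void)onStart:(ZegoPublishChannel)channel {
    if (channel == ZegoPublishChannelMain) {
        [self.captureDevice startCapture];
    } else {
        // Start the capture source used for the auxiliary channel.
    }
}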

Send the captured video frame data to the SDK

When you call startPreview to start the local preview or call startPublishingStream to start the stream publishing, the callback onStart will be triggered. On receiving this callback, you can start the video capture process and then call sendCustomVideoCapturePixelBuffer or sendCustomVideoCaptureTextureData to send the captured video frame data to the SDK.
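For example, the following calls trigger the onStart callback. The room ID, user ID, stream ID, and preview view here are illustrative placeholders.

// Illustrative sketch: logging in and starting the preview/publishing triggers onStart.
[[ZegoExpressEngine sharedEngine] loginRoom:@"room1" user:[ZegoUser userWithUserID:@"user1"]];
[[ZegoExpressEngine sharedEngine] startPreview:[ZegoCanvas canvasWithView:self.previewView]];
[[ZegoExpressEngine sharedEngine] startPublishingStream:@"stream1"];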

During the video capture process, if you send encoded video frame data to the SDK via the sendCustomVideoCaptureEncodedData method, the SDK cannot render the local preview. In that case, you will need to implement the local preview yourself.

See below for an example of sending captured video frame data in CVPixelBuffer format to the SDK.

#pragma mark - AVCaptureVideoDataOutputSampleBufferDelegate

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CMTime timeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

    // Send the captured video frame in CVPixelBuffer data to ZEGO SDK
    [[ZegoExpressEngine sharedEngine] sendCustomVideoCapturePixelBuffer:buffer timeStamp:timeStamp];
}

When both the stream publishing and local preview are stopped, the callback onStop will be triggered. On receiving this callback, you can stop the video capture process, for example, turn off the camera.

Optional: Set the state of the capturing device

To set the state of the custom capture device after receiving the onStart callback, call the setCustomVideoCaptureDeviceState method as needed.
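For example, the sketch below reports that the custom capture device is open. The exact signature is an assumption to check against the API reference for your SDK version.

// Sketch: report the custom capture device state to remote stream players.
// Signature assumed; verify against your SDK version's API reference.
[[ZegoExpressEngine sharedEngine] setCustomVideoCaptureDeviceState:YES
                                                             state:ZegoRemoteDeviceStateOpen
                                                           channel:ZegoPublishChannelMain];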

For the stream player to obtain the state of the capturing device, listen for the onRemoteCameraStateUpdate callback.

If the stream publisher sets the device state to ZegoRemoteDeviceStateDisable or ZegoRemoteDeviceStateMute by calling the setCustomVideoCaptureDeviceState method, the stream player will not receive the event notification through the onRemoteCameraStateUpdate callback.

When the stream publisher turns off the camera with the enableCamera method, the stream player receives the device state ZegoRemoteDeviceStateDisable through the onRemoteCameraStateUpdate callback. When the stream publisher stops publishing video streams with the mutePublishStreamVideo method, the stream player receives the device state ZegoRemoteDeviceStateMute through the same callback.
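On the stream player side, a handler for this callback might look like the following sketch, implemented on a class that conforms to ZegoEventHandler.

// Sketch: on the stream player side, observe the publisher's camera state.
- (void)onRemoteCameraStateUpdate:(ZegoRemoteDeviceState)state streamID:(NSString *)streamID {
    if (state == ZegoRemoteDeviceStateDisable) {
        // The publisher turned the camera off via enableCamera.
    } else if (state == ZegoRemoteDeviceStateMute) {
        // The publisher stopped publishing video via mutePublishStreamVideo.
    }
}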

FAQs

  1. How to use the ZegoVideoBufferTypeGLTexture2D data type to transfer the captured video data?

Set the bufferType attribute of the ZegoCustomVideoCaptureConfig object to ZegoVideoBufferTypeGLTexture2D, and then call the sendCustomVideoCaptureTextureData method to send the captured video frame.
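A sketch of that flow is shown below. The textureID, size, and timeStamp values are placeholders, and the sendCustomVideoCaptureTextureData signature should be verified against your SDK version.

// Sketch: configure the GLTexture2D buffer type, then send each captured texture.
ZegoCustomVideoCaptureConfig *textureConfig = [[ZegoCustomVideoCaptureConfig alloc] init];
textureConfig.bufferType = ZegoVideoBufferTypeGLTexture2D;
[[ZegoExpressEngine sharedEngine] enableCustomVideoCapture:YES config:textureConfig channel:ZegoPublishChannelMain];

// For each captured frame (textureID, size, and timeStamp are placeholders):
[[ZegoExpressEngine sharedEngine] sendCustomVideoCaptureTextureData:textureID size:CGSizeMake(720, 1280) timestamp:timeStamp];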

  2. With custom video capture enabled, the local preview works fine, but remote viewers see distorted video images. How to solve the problem?

This happens because the aspect ratio of the captured video differs from the aspect ratio of the SDK's default encoding resolution. For instance, if the aspect ratio of the captured video is 4:3 but the aspect ratio of the SDK's default encoding resolution is 16:9, you can solve the problem using any one of the following solutions:

  • Option 1: Change the video capture aspect ratio to 16:9.
  • Option 2: Call setVideoConfig to set the SDK's video encoding resolution to one with a 4:3 aspect ratio (see the sketch after this list).
  • Option 3: Call setCustomVideoCaptureFillMode to set the video fill mode to ZegoViewModeAspectFit (the video will have black padding areas) or ZegoViewModeAspectFill (part of the video image will be cropped out).
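A sketch of Option 2, assuming the captured video is 4:3 (for example, 960x720):

// Sketch of Option 2: match the encoding resolution to a 4:3 capture.
ZegoVideoConfig *videoConfig = [[ZegoVideoConfig alloc] init];
videoConfig.encodeResolution = CGSizeMake(960, 720); // 4:3 aspect ratio
[[ZegoExpressEngine sharedEngine] setVideoConfig:videoConfig];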
  3. After custom video capture is enabled, the video playback frame rate is not the same as the video capture frame rate. How to solve the problem?

Call setVideoConfig to set the encoding frame rate to be the same as the video capture frame rate (i.e., the frequency of calling sendCustomVideoCapturePixelBuffer or sendCustomVideoCaptureTextureData).
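For example, if you send frames at 30 fps, matching the encoder frame rate might look like this sketch:

// Sketch: align the encoding frame rate with the capture rate (30 fps assumed).
ZegoVideoConfig *videoConfig = [[ZegoVideoConfig alloc] init];
videoConfig.fps = 30;
[[ZegoExpressEngine sharedEngine] setVideoConfig:videoConfig];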

  4. Does the SDK process the received video frame data synchronously or asynchronously?

When the SDK receives the video frame data, it first copies the data synchronously and then performs encoding and other operations asynchronously. The captured video frame data can be released once it is passed to the SDK.

  5. How to implement video rotation during custom video capture?

During custom video capture, when the device orientation changes, you can switch the screen between portrait and landscape orientation using either of the following two methods:

  • Process the video frame data yourself: In the device orientation change callback, rotate the captured video frame data, and then pass the processed data to the SDK by calling the sendCustomVideoCapturePixelBuffer method.
  • Process the video frame data with the SDK: In the device orientation change callback, set the rotation property of the ZegoVideoEncodedFrameParam object based on the actual orientation, and then call the sendCustomVideoCaptureEncodedData method to send the video frame data together with the rotation parameter to the SDK (see the sketch after this list).
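A sketch of the second method; the rotation property and the sendCustomVideoCaptureEncodedData signature are assumptions to verify against the API reference for your SDK version.

// Sketch of the second method: attach rotation metadata to encoded frames.
// encodedData and timeStamp are placeholders; property names assumed.
ZegoVideoEncodedFrameParam *param = [[ZegoVideoEncodedFrameParam alloc] init];
param.rotation = 90; // Degrees, matching the new device orientation
[[ZegoExpressEngine sharedEngine] sendCustomVideoCaptureEncodedData:encodedData params:param timestamp:timeStamp];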