
How We Built a Cross‑Platform Hardware‑Accelerated Live‑Streaming SDK for WeChat Video Channels

This article details the design and implementation of a cross‑platform SDK that enables external hardware devices to stream live video on WeChat Video Channels, covering user authentication, network signaling, UI integration, audio‑video encoding, and hardware acceleration across Android, iOS, PC and embedded platforms.

WeChat Client Technology Team

Technical Background

WeChat Video Channels recently added support for external hardware devices to stream live video. To enable this, a cross‑platform SDK was built that receives audio‑video data from the device, pushes it to WeChat’s backend, and displays the stream in the live room.

Key Problems

WeChat user identity

Network signaling channel

UI display and interaction

Audio‑video encoding & streaming

The solution must run on Android, iOS, Windows, macOS and embedded platforms such as Raspberry Pi.

WeChat User Identity

The SDK integrates with WeChat’s Open Platform to obtain user identity via either OAuth login (suitable for mobile apps) or QR‑code login (for PC and embedded devices). The two flows are illustrated below.

A unified device‑authorization scheme was designed to cover all target devices.
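For devices without a WeChat client installed, QR-code login typically works by displaying a short-lived ticket as a QR code and polling the backend until the user confirms on their phone. The sketch below illustrates that polling loop; the `QrStatus` states and the `fetch_status` callback are assumptions standing in for the real HTTPS endpoint, not the actual Open Platform API.

```cpp
#include <functional>
#include <string>

// Hypothetical status values reported by the backend while the user
// scans and confirms the QR code on their phone.
enum class QrStatus { Pending, Scanned, Confirmed, Expired };

// Polls the ticket-status endpoint until the login is confirmed or the
// ticket expires. `fetch_status` stands in for the real network call;
// `max_polls` bounds the loop so an abandoned code eventually gives up.
inline bool wait_for_qr_login(
    const std::string& ticket,
    const std::function<QrStatus(const std::string&)>& fetch_status,
    int max_polls) {
  for (int i = 0; i < max_polls; ++i) {
    switch (fetch_status(ticket)) {
      case QrStatus::Confirmed: return true;   // user approved on phone
      case QrStatus::Expired:   return false;  // ticket timed out; show a new code
      default:                  break;         // Pending/Scanned: keep polling
    }
  }
  return false;  // gave up after max_polls attempts
}
```

In a real client each iteration would sleep between polls (or use a long-poll request) rather than spin.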

Network Signaling

After user authentication, the SDK needs a reliable, secure channel to configure the live room, exchange comments, and notify the backend when the stream ends. The open‑source WeChat Mars library provides the underlying network service, while the iLink platform supplies a communication protocol and user‑authentication service.
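One ingredient of a "reliable" long-lived channel is a sane reconnection policy. Mars implements its own internally; purely as an illustration of the idea, the sketch below computes a capped exponential-backoff schedule (the specific base and cap values are assumptions, not Mars defaults).

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative reconnect schedule for a long-lived signaling channel:
// delays double after each failed attempt, capped at max_ms so a long
// outage does not push retries arbitrarily far apart.
inline std::vector<uint32_t> backoff_schedule(int attempts,
                                              uint32_t base_ms,
                                              uint32_t max_ms) {
  std::vector<uint32_t> delays;
  uint32_t d = base_ms;
  for (int i = 0; i < attempts; ++i) {
    delays.push_back(std::min(d, max_ms));
    if (d < max_ms) d *= 2;  // double until the cap is reached
  }
  return delays;
}
```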

UI Display and Interaction

The live‑room UI is implemented as an H5 page loaded in a WebView. A JavaScript bridge injects APIs that control the stream and handle interactive messages. Various implementation options (native, Flutter, React Native, H5) were evaluated; H5 was chosen for its low integration cost and ease of deployment.
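On the native side, a JS bridge of this kind usually boils down to a registry that routes named API calls from the page to native handlers and returns a serialized result. The sketch below shows that dispatch pattern; the class, API names, and JSON payloads are illustrative assumptions, not the SDK's actual interface.

```cpp
#include <functional>
#include <map>
#include <string>
#include <utility>

// Minimal sketch of the native side of a WebView JS bridge: the H5 page
// invokes a named API with a string payload, and the native layer routes
// it to a registered handler and returns the handler's response.
class JsBridge {
 public:
  using Handler = std::function<std::string(const std::string&)>;

  void register_api(const std::string& name, Handler h) {
    handlers_[name] = std::move(h);
  }

  // Called by the WebView glue when the page posts a message.
  std::string invoke(const std::string& name, const std::string& payload) {
    auto it = handlers_.find(name);
    if (it == handlers_.end()) return "{\"error\":\"unknown api\"}";
    return it->second(payload);
  }

 private:
  std::map<std::string, Handler> handlers_;
};
```

The same registry can carry traffic in the other direction as well, with the native layer evaluating a JavaScript callback in the WebView to deliver interactive messages such as comments.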

Audio‑Video Encoding & Streaming

Raw camera and microphone data are encoded to reduce bandwidth before being pushed via the RTMP protocol. To avoid licensing issues, the SDK uses libyuv, openh264, fdk‑aac and srs‑librtmp, assembled in a pipeline with multi‑queue buffering and multi‑thread processing. Default parameters: 25 fps, 1280×720, H.264, 1.2 Mbps video; 44.1 kHz, stereo, 128 kbps AAC audio.
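The multi-queue buffering between pipeline stages (capture, color conversion, encoding, muxing) can be sketched as a bounded blocking queue: when a downstream stage falls behind, the queue fills and the producer blocks, applying back-pressure instead of growing memory without bound. This is a generic producer-consumer sketch, not the SDK's actual queue implementation; the capacity and element type are assumptions.

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>
#include <utility>

// Bounded blocking queue connecting two pipeline stages running on
// separate threads. push() blocks when full (back-pressure on the
// producer); pop() blocks when empty (consumer waits for data).
template <typename T>
class BoundedQueue {
 public:
  explicit BoundedQueue(std::size_t capacity) : capacity_(capacity) {}

  void push(T item) {
    std::unique_lock<std::mutex> lk(mu_);
    not_full_.wait(lk, [&] { return q_.size() < capacity_; });
    q_.push_back(std::move(item));
    not_empty_.notify_one();
  }

  T pop() {
    std::unique_lock<std::mutex> lk(mu_);
    not_empty_.wait(lk, [&] { return !q_.empty(); });
    T item = std::move(q_.front());
    q_.pop_front();
    not_full_.notify_one();
    return item;
  }

 private:
  std::size_t capacity_;
  std::deque<T> q_;
  std::mutex mu_;
  std::condition_variable not_full_, not_empty_;
};
```

With one such queue between each pair of stages, a slow encoder throttles capture rather than exhausting memory, which matters on constrained devices like a Raspberry Pi.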

Performance on a Google Pixel C shows CPU usage rising from ~8 % (pre‑stream) to ~32 % (streaming) and memory consumption increasing from 234 MB to 258 MB.

Hardware Acceleration

To reduce CPU load, hardware encoders are used on Android and iOS when available; otherwise the software pipeline is used as a fallback. Benchmarks on Pixel C and iPhone XS Max confirm lower CPU usage with hardware encoding.
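The selection policy can be summarized as: try the platform hardware encoder first (MediaCodec on Android, VideoToolbox on iOS), and fall back to the openh264 software path if it is unavailable or fails to initialize. The sketch below shows that fallback shape; the `VideoEncoder` interface and factory signature are assumptions, not the SDK's real API.

```cpp
#include <memory>
#include <string>

// Hypothetical encoder interface shared by the hardware and software paths.
struct VideoEncoder {
  virtual ~VideoEncoder() = default;
  virtual std::string name() const = 0;
};

// Software fallback backed by openh264.
struct SoftwareEncoder : VideoEncoder {
  std::string name() const override { return "openh264"; }
};

// Returns the hardware encoder when the platform factory succeeds,
// otherwise the software implementation, so streaming works everywhere.
inline std::unique_ptr<VideoEncoder> select_encoder(
    std::unique_ptr<VideoEncoder> (*hw_factory)()) {
  if (hw_factory) {
    if (auto enc = hw_factory()) return enc;  // hardware path available
  }
  return std::make_unique<SoftwareEncoder>();  // fallback
}
```

A production version would also fall back at runtime if the hardware encoder starts returning errors mid-stream, not only at startup.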

Final Architecture

The completed SDK consists of the following modules:

WeChat user identity: OAuth or QR‑code login

Network signaling: iLink‑network + iLink‑tdi

UI: H5 + JS bridge + JS API

Audio‑video encoding & streaming: libyuv + openh264 + fdk‑aac + srs‑librtmp + platform‑specific hardware encoding

Demo applications show the size increase after integrating the live‑streaming component (e.g., Android arm64 from 4.9 MB to 11 MB).
