Implementing Shift-and-Blend Video Effects with OpenGL Shaders
In this article, Tencent engineer Chang Qing explains the biology-inspired "Shift-and-Blend" video effect: separating an image's RGB channels, scaling and translating each one, then alpha-blending the results. Along the way he introduces a normalized coordinate system, the scaling and translation operations built on it, and a high-performance OpenGL fragment shader implementation for real-time video processing, with links to a demo and source code.
Author introduction: Chang Qing, a Tencent employee since 2008, works on client development and currently leads audio‑video terminal solutions in the Tencent Video Cloud team, covering interactive live streaming, on‑demand, short video, real‑time video calls, image processing, AI, etc.
Starting from the evolution of eyes in the Cambrian period, the article narrates how photoreceptor cells aggregated into a spherical eye structure, the role of the lens, and the subsequent arms race for color discrimination.
Human vision relies on three types of cone cells, whose peak sensitivities roughly correspond to the red, green, and blue primaries. Modern LCD displays and short-video effects exploit this RGB color space.
The article then describes a visual effect called "Shift-and-Blend" used in short-video processing. The effect is created by removing individual color channels (red, green, blue) from a static image, scaling each result up by 10 % (a factor of 1.1) and translating it, and finally compositing the three images at 33 % opacity to produce a ghost-like overlay.
To apply the same effect to video frames, the author defines a normalized coordinate system where the image center is (0,0) and the axes range from –1 to 1.
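The mapping between ordinary texture coordinates in [0, 1] and this normalized system can be sketched in a few lines of Python (the function names are illustrative, not from the article):

```python
def tex_to_norm(u, v):
    """Map texture coordinates in [0, 1] to the article's
    normalized system: center at (0, 0), axes in [-1, 1]."""
    return 2.0 * u - 1.0, 2.0 * v - 1.0

def norm_to_tex(x, y):
    """Inverse mapping, back to texture space [0, 1]."""
    return (x + 1.0) / 2.0, (y + 1.0) / 2.0

# The image center (0.5, 0.5) maps to the origin:
print(tex_to_norm(0.5, 0.5))  # (0.0, 0.0)
```

Working in a centered system keeps scaling symmetric: multiplying (x, y) by a factor grows the image about its center rather than about a corner.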
Four basic operations are introduced:
Scaling: (x', y') = s·(x, y) = (s·x, s·y)
Translation: (x', y') = (x + Δx, y + Δy)
Combination: (x', y') = (s·(x + Δx), s·(y + Δy))
Alpha blending: src × (1 − alpha) + overlay × alpha
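The four operations are simple enough to sketch in plain Python; note that the combination formula translates first and then scales, and alpha blending is applied per channel (function names here are illustrative):

```python
def scale(p, s):
    # Scaling about the origin: (x', y') = (s*x, s*y)
    x, y = p
    return s * x, s * y

def translate(p, dx, dy):
    # Translation: (x', y') = (x + dx, y + dy)
    x, y = p
    return x + dx, y + dy

def combine(p, s, dx, dy):
    # Combination: translate first, then scale -> (s*(x+dx), s*(y+dy))
    return scale(translate(p, dx, dy), s)

def alpha_blend(src, overlay, alpha):
    # Per-channel blend: src*(1-alpha) + overlay*alpha
    return tuple(s * (1.0 - alpha) + o * alpha
                 for s, o in zip(src, overlay))

print(combine((0.5, -0.5), 1.1, 0.1, 0.1))
print(alpha_blend((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.33))
```

Running the effect amounts to applying `combine` to each channel's sampling coordinates and `alpha_blend` to composite the three shifted copies.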
Because processing dozens of frames per second requires high performance, the implementation uses OpenGL for hardware acceleration, which is widely supported on both desktop and mobile platforms.
The core of the effect is a fragment shader written in GLSL. The shader computes an offset, applies a scale factor, adjusts the texture coordinates, and samples the input texture to produce the transformed pixel color.
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main() {
    // Per-channel translation in texture space
    vec2 offset = vec2(0.05, 0.05);
    // Dividing the lookup coordinate by 1.1 enlarges
    // the sampled image by 10 %
    float scale = 1.1;
    vec2 coordinate;
    coordinate.x = (textureCoordinate.x - offset.x) / scale;
    coordinate.y = (textureCoordinate.y - offset.y) / scale;
    gl_FragColor = texture2D(inputImageTexture, coordinate);
}

The article explains the shader line by line: the coordinate variable holds the transformed (x, y) position, texture2D fetches the color at that coordinate, and gl_FragColor outputs the final RGBA color.
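The key subtlety is that the shader transforms the sampling coordinate, not the pixel position, so the offset and scale are applied in inverse: subtracting the offset shifts the drawn image forward, and dividing by the scale factor makes it appear larger. A small Python sketch of that same lookup math (the function name is illustrative):

```python
def shader_coordinate(u, v, offset=(0.05, 0.05), scale=1.1):
    """Mirror of the fragment shader's sampling coordinate:
    subtract the offset, then divide by the scale factor.
    Because this moves the *lookup* point, the rendered image
    appears translated by +offset and scaled up by `scale`."""
    return (u - offset[0]) / scale, (v - offset[1]) / scale

# A fragment at texture-space (0.5, 0.5) samples the input here:
print(shader_coordinate(0.5, 0.5))
```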
For video, additional tricks such as blending the current frame with the previous frame can create richer effects. The author provides links to a full demo and open‑source code for the short‑video SDK (UGSV) on Tencent Cloud.
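The frame-feedback idea mentioned above can be sketched as a running blend of the current frame into the previous output, which leaves a motion trail; the weight 0.4 below is an illustrative choice, not a value from the article:

```python
def blend_frames(prev, curr, alpha=0.4):
    """Feedback blend: prev*(1-alpha) + curr*alpha, per channel.
    `alpha` controls how quickly old frames fade out."""
    return [p * (1.0 - alpha) + c * alpha for p, c in zip(prev, curr)]

trail = [0.0, 0.0, 0.0]  # running output (one pixel, RGB)
for frame in ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]):
    trail = blend_frames(trail, frame)
print(trail)  # the red frame still contributes, faded
```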
Tencent Cloud Developer
Official Tencent Cloud community account that brings together developers, shares practical tech insights, and fosters an influential tech exchange community.