iOS OpenGL ES Lab: 2D Smart Danmu (Bullet Comment) – First Episode
This article introduces a practical iOS OpenGL ES tutorial that demonstrates how to implement a 2D smart danmu system by separating foreground and background video layers, modifying IJKPlayer rendering, adding custom shaders, enabling blending, and optionally using CoreImage for face detection.
The author presents the first part of an OpenGL ES laboratory series focused on iOS, aiming to create a 2D smart danmu (bullet comment) system that avoids covering people in video playback.
Although the article does not cover OpenGL ES fundamentals, it assumes readers have basic knowledge and provides a hands‑on demo that extends IJKPlayer with an additional OpenGL layer to render foreground and background separately.
Demo
The demo includes a Git patch with the IJKPlayer modifications (not uploaded due to size) and a CoreImage‑based face‑detection danmu project named QHVisionDemo. The repository URLs are listed in the Links section.
Architecture
A new view QHIJKSDLGLShowView is introduced to handle rendering for each layer, and IJKSDLGLView now holds two instances ( showFV for foreground and showBV for background). The rendering flow is split so that both views receive the same video overlay.
@interface QHIJKSDLGLShowView : UIView

@property (nonatomic, readonly) CGFloat fps;
@property (nonatomic) CGFloat scaleFactor;
@property (nonatomic, weak) id protocol;

@end
@interface IJKSDLGLView : UIView

@property (nonatomic, strong) QHIJKSDLGLShowView *showFV; // foreground view
@property (nonatomic, strong) QHIJKSDLGLShowView *showBV; // background view

@end

The display: method now distributes the overlay to both foreground and background views before invoking the original internal display routine.
- (void)display:(SDL_VoutOverlay *)overlay {
    if (![self setupGLOnce]) return;

    if (![self tryLockGLActive]) {
        if (0 == (_tryLockErrorCount % 100)) {
            NSLog(@"IJKSDLGLView:display: unable to tryLock GL active: %d\n", _tryLockErrorCount);
        }
        _tryLockErrorCount++;
        return;
    }
    _tryLockErrorCount = 0;

    // dispatch the same overlay to background and foreground views
    [self.showBV display:overlay];
    [self.showFV display:overlay];

    [self displayInternal:overlay];
    [self unlockGLActive];
}

Shader Modification
The core of the smart danmu is a custom fragment shader that tests whether a fragment’s texture coordinates fall inside a predefined convex quadrilateral (here a rectangle); if not, the fragment’s alpha is set to zero, making it transparent.
static const char g_shader_front[] = IJK_GLES_STRING(
    precision highp float;
    varying highp vec2 vv2_Texcoord;
    uniform mat3 um3_ColorConversion;
    uniform lowp sampler2D us2_SamplerX;
    uniform lowp sampler2D us2_SamplerY;
    uniform lowp sampler2D us2_SamplerZ;

    void main() {
        mediump float fx = vv2_Texcoord.x;
        mediump float fy = vv2_Texcoord.y;
        // Corners of the visible region. GLSL ES 1.00 has no array
        // initializers (and commas would also break the stringizing
        // macro), so the elements are assigned one by one.
        mediump float x[4];
        mediump float y[4];
        x[0] = 0.3; x[1] = 0.6; x[2] = 0.6; x[3] = 0.3;
        y[0] = 0.2; y[1] = 0.2; y[2] = 0.6; y[3] = 0.6;
        // Cross product of each edge with the fragment position:
        // all four share a sign iff the fragment is inside the quad.
        mediump float a = (x[1]-x[0])*(fy-y[0]) - (y[1]-y[0])*(fx-x[0]);
        mediump float b = (x[2]-x[1])*(fy-y[1]) - (y[2]-y[1])*(fx-x[1]);
        mediump float c = (x[3]-x[2])*(fy-y[2]) - (y[3]-y[2])*(fx-x[2]);
        mediump float d = (x[0]-x[3])*(fy-y[3]) - (y[0]-y[3])*(fx-x[3]);
        if ((a >= 0.0 && b >= 0.0 && c >= 0.0 && d >= 0.0) ||
            (a <= 0.0 && b <= 0.0 && c <= 0.0 && d <= 0.0)) {
            // Inside the region: ordinary YUV -> RGB conversion, opaque.
            mediump vec3 yuv;
            lowp vec3 rgb;
            yuv.x = texture2D(us2_SamplerX, vv2_Texcoord).r - (16.0 / 255.0);
            yuv.y = texture2D(us2_SamplerY, vv2_Texcoord).r - 0.5;
            yuv.z = texture2D(us2_SamplerZ, vv2_Texcoord).r - 0.5;
            rgb = um3_ColorConversion * yuv;
            gl_FragColor = vec4(rgb, 1.0);
        } else {
            // Outside: fully transparent, revealing the danmu layer below.
            gl_FragColor = vec4(1.0, 1.0, 1.0, 0.0);
        }
    }
);

Blending is enabled with glEnable(GL_BLEND) and the blend function GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA, so the transparent regions of the foreground layer let the danmu underneath show through.
glEnable(GL_BLEND);                                 // enable blending
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // standard alpha blending

Danmu Integration
The danmu view is inserted between the foreground and background OpenGL views, allowing existing danmu components to be used.
- (void)addDanmu:(UIView *)view {
    [_glView insertSubview:view belowSubview:_glView.showFV];
}

CoreImage Face Detection
A separate demo, QHVisionDemo, shows how to use CIDetector to locate faces in a static image, feed the coordinates to the shader, and achieve the same smart‑danmu effect.
// Convert the UIImage to a CIImage and run the face detector
CIImage *faceImage = [CIImage imageWithCGImage:image.CGImage];
CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:opts];
NSArray *features = [faceDetector featuresInImage:faceImage];

// Upload the resulting region coordinates to the foreground shader
if (_bFront) {
    GLfloat va[_v_mesh_a.count];
    for (int i = 0; i < _v_mesh_a.count; i++) {
        va[i] = [_v_mesh_a[i] floatValue];
    }
    glUniform1fv(_v_mesh, (GLsizei)_v_mesh_a.count, va);
}

The article concludes that while the current demo demonstrates basic danmu occlusion, full smart danmu would require robust face detection and segmentation, which are left as future work.
Links
Gitee profile: https://gitee.com/chenqihui
Full video danmu system tutorial: https://mp.weixin.qq.com/s/Y0L1d124V9tWoJA7hYNRMQ
QHAIDanmuMan repository: https://gitee.com/chenqihui/qhaidanmu-man
CoreImage face detection article: https://www.jianshu.com/p/15fad9efe5ba
Sohu Tech Products
A knowledge-sharing platform for Sohu's technology products. As a leading Chinese internet brand with media, video, search, and gaming services and over 700 million users, Sohu continuously drives tech innovation and practice. We’ll share practical insights and tech news here.