Flutter External Texture Rendering and Optimization
Lessons from multi‑video calling — separating drawing from presentation and building a unified LayerTree — carry over directly to Flutter, whose external‑texture mechanism shares OpenGL contexts between Flutter and native code so Skia can render native pixel buffers directly on iOS and Android with significantly lower latency and memory use.
When the team implemented multi‑video calls in 2013, rendering each video stream directly to the screen caused severe CPU and GPU overhead. The solution was to separate drawing from presenting: build a unified LayerTree and trigger screen updates only on VSync.
Flutter’s rendering pipeline consists of three main components: the LayerTree generated by the Dart runtime, the Skia graphics engine, and the Shell that handles platform‑specific tasks such as EAGLContext management and buffer presentation (glPresentRenderBuffer on iOS, glSwapBuffer on Android). After layout, the engine traverses the LayerTree, draws each leaf node with Skia, and finally presents the frame.
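That traversal can be sketched in a few lines of C++. The class names below (Layer, PictureLayer, ContainerLayer) are simplified stand-ins for the engine's real layer hierarchy, and the "present" step stands in for the Shell's buffer swap:

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical minimal layer tree: container layers hold children,
// leaf layers know how to paint themselves.
struct Layer {
  virtual ~Layer() = default;
  virtual void Paint(std::vector<std::string>& ops) const = 0;
};

struct PictureLayer : Layer {  // leaf node: drawn with Skia in the real engine
  explicit PictureLayer(std::string n) : name(std::move(n)) {}
  void Paint(std::vector<std::string>& ops) const override {
    ops.push_back("draw:" + name);
  }
  std::string name;
};

struct ContainerLayer : Layer {  // interior node: recurses into its children
  void Add(std::unique_ptr<Layer> child) { children.push_back(std::move(child)); }
  void Paint(std::vector<std::string>& ops) const override {
    for (const auto& c : children) c->Paint(ops);
  }
  std::vector<std::unique_ptr<Layer>> children;
};

// After layout, the engine walks the root, painting each leaf in order,
// and finally presents the frame (glPresentRenderBuffer / buffer swap in the Shell).
std::vector<std::string> RenderFrame(const Layer& root) {
  std::vector<std::string> ops;
  root.Paint(ops);
  ops.push_back("present");
  return ops;
}
```

The key property this models is that drawing is a pure tree traversal; presentation happens exactly once per frame, at the end.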
Because Flutter isolates UI code from native code, accessing large native images (camera frames, video frames, album photos) from Dart is difficult: transferring them through the channel mechanism incurs heavy CPU and memory costs.
For this, Flutter provides a special mechanism called the external texture. A TextureLayer node in the LayerTree corresponds to a Texture widget on the Flutter side. When the widget is drawn, native code supplies the image data, which Flutter then renders.
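The contract between the two sides can be sketched as a registry mapping texture ids to frame providers. Everything here (ExternalTextureRegistry, PixelBuffer, FrameProvider) is an illustrative stand-in, not the engine's actual API:

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <utility>
#include <vector>

// Hypothetical sketch: native code registers a provider per texture id;
// when the corresponding TextureLayer is painted, the engine pulls the
// latest frame from the provider.
using PixelBuffer = std::vector<uint8_t>;           // stand-in for CVPixelBuffer
using FrameProvider = std::function<PixelBuffer()>;  // stand-in for copyPixelBuffer

class ExternalTextureRegistry {
 public:
  void Register(int64_t texture_id, FrameProvider provider) {
    providers_[texture_id] = std::move(provider);
  }
  // Called during LayerTree traversal when a TextureLayer with this id is drawn.
  PixelBuffer Acquire(int64_t texture_id) {
    auto it = providers_.find(texture_id);
    return it != providers_.end() ? it->second() : PixelBuffer{};
  }

 private:
  std::map<int64_t, FrameProvider> providers_;
};
```

The pull model matters: Flutter asks for the frame only when the layer is actually painted, so a stream that is off-screen costs nothing.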
The iOS implementation follows three steps: (1) call copyPixelBuffer on the external_texture_ object to obtain a CVPixelBuffer; (2) create an OpenGL texture from the pixel buffer via CVOpenGLESTextureCacheCreateTextureFromImage; (3) wrap the OpenGL texture in a SkImage and draw it with Skia.
void IOSExternalTextureGL::Paint(SkCanvas& canvas, const SkRect& bounds) {
  if (!cache_ref_) {
    CVOpenGLESTextureCacheRef cache;
    CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                                                [EAGLContext currentContext], NULL, &cache);
    if (err == noErr) {
      cache_ref_.Reset(cache);
    } else {
      FXL_LOG(WARNING) << "Failed to create GLES texture cache: " << err;
      return;
    }
  }
  fml::CFRef<CVPixelBufferRef> bufferRef;
  bufferRef.Reset([external_texture_ copyPixelBuffer]);
  if (bufferRef != nullptr) {
    CVOpenGLESTextureRef texture;
    CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, cache_ref_, bufferRef, nullptr, GL_TEXTURE_2D, GL_RGBA,
        static_cast<int>(CVPixelBufferGetWidth(bufferRef)),
        static_cast<int>(CVPixelBufferGetHeight(bufferRef)), GL_BGRA, GL_UNSIGNED_BYTE, 0,
        &texture);
    texture_ref_.Reset(texture);
    if (err != noErr) {
      FXL_LOG(WARNING) << "Could not create texture from pixel buffer: " << err;
      return;
    }
  }
  if (!texture_ref_) {
    return;
  }
  GrGLTextureInfo textureInfo = {CVOpenGLESTextureGetTarget(texture_ref_),
                                 CVOpenGLESTextureGetName(texture_ref_), GL_RGBA8_OES};
  GrBackendTexture backendTexture(bounds.width(), bounds.height(), GrMipMapped::kNo, textureInfo);
  sk_sp<SkImage> image =
      SkImage::MakeFromTexture(canvas.getGrContext(), backendTexture, kTopLeft_GrSurfaceOrigin,
                               kRGBA_8888_SkColorType, kPremul_SkAlphaType, nullptr);
  if (image) {
    canvas.drawImage(image, bounds.x(), bounds.y());
  }
}

The external_texture_ object is registered from the native side via:
void PlatformViewIOS::RegisterExternalTexture(int64_t texture_id,
                                              NSObject<FlutterTexture>* texture) {
  RegisterTexture(std::make_shared<IOSExternalTextureGL>(texture_id, texture));
}

To avoid the costly GPU‑CPU‑GPU copy, the two OpenGL contexts used by Flutter (on the GPU Runner and the IO Runner) share a ShareGroup. Native code creates its own context with the same ShareGroup, allowing textures to be passed directly to Skia without intermediate copies. This reduces both CPU time (e.g., a 720p frame on Android drops from ~10 ms to under 5 ms) and memory usage.
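The effect of a ShareGroup can be modeled with a toy example: contexts built from the same group resolve texture names in one shared store, so a frame uploaded by the native context is visible to Flutter's context without any copy. All names here are hypothetical:

```cpp
#include <map>
#include <memory>
#include <string>
#include <utility>

// Hypothetical model: a ShareGroup owns the texture name space; every
// context created from the same group sees the same textures.
using TextureStore = std::map<unsigned, std::string>;  // texture name -> pixel data

struct ShareGroup {
  std::shared_ptr<TextureStore> textures = std::make_shared<TextureStore>();
};

class GLContext {
 public:
  explicit GLContext(const ShareGroup& group) : textures_(group.textures) {}
  // Producer side (native code): upload a frame under a texture name.
  void UploadTexture(unsigned name, std::string pixels) {
    (*textures_)[name] = std::move(pixels);
  }
  // Consumer side (Flutter's GPU context): look the texture up by name --
  // no pixel data crosses contexts, only the name does.
  const std::string* LookupTexture(unsigned name) const {
    auto it = textures_->find(name);
    return it != textures_->end() ? &it->second : nullptr;
  }

 private:
  std::shared_ptr<TextureStore> textures_;
};
```

A context created from a different ShareGroup would have its own store, which is exactly why the native context must be created with Flutter's group rather than a fresh one.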
Flutter runs four TaskRunners: the Platform Runner (main thread, native‑engine interaction), the UI Runner (executes Dart code and builds the LayerTree), the GPU Runner (rendering), and the IO Runner (resource loading). The shared‑context design enables external textures to be used efficiently across these runners.
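A TaskRunner is essentially a serial message loop. The sketch below is a single-threaded model of that idea — in the real engine each runner's loop lives on its own thread — with all names illustrative:

```cpp
#include <functional>
#include <queue>
#include <string>
#include <utility>

// Hypothetical message-loop model of a TaskRunner: tasks posted to the
// queue execute in FIFO order when the runner's loop spins.
class TaskRunner {
 public:
  explicit TaskRunner(std::string name) : name_(std::move(name)) {}

  void PostTask(std::function<void()> task) { tasks_.push(std::move(task)); }

  // Drain the queue, as the runner's own thread would on each loop iteration.
  void RunLoopOnce() {
    while (!tasks_.empty()) {
      auto task = std::move(tasks_.front());
      tasks_.pop();
      task();
    }
  }

  const std::string& name() const { return name_; }

 private:
  std::string name_;
  std::queue<std::function<void()>> tasks_;
};
```

The point of the model: work for a given runner always executes in order on that runner, so texture uploads posted to the IO Runner never race rasterization on the GPU Runner.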
On Android the principle is identical, but the implementation uses SurfaceTexture and a shareContext. The following JNI snippet shows how the native EGL context is obtained and wrapped for Flutter:
static jobject GetContext(JNIEnv* env, jobject jcaller, jlong shell_holder) {
  jclass eglcontextClassLocal = env->FindClass("android/opengl/EGLContext");
  jmethodID eglcontextConstructor =
      env->GetMethodID(eglcontextClassLocal, "<init>", "(J)V");
  void* cxt = ANDROID_SHELL_HOLDER->GetPlatformView()->GetContext();
  if ((EGLContext)cxt == EGL_NO_CONTEXT) {
    return env->NewObject(eglcontextClassLocal, eglcontextConstructor,
                          reinterpret_cast<jlong>(EGL_NO_CONTEXT));
  }
  return env->NewObject(eglcontextClassLocal, eglcontextConstructor,
                        reinterpret_cast<jlong>(cxt));
}

Key recommendations: avoid OpenGL calls on the main thread, always make the correct context current before any GL operation, and never delete textures that belong to Flutter's context. The article mainly uses iOS examples; Android follows the same flow with minor API differences.
Xianyu Technology
Official account of the Xianyu technology team