Can AI Generate Real‑Time, Editable Motion Graphics? Inside Neon Vibe Motion

This article examines Neon Vibe Motion, an open‑source platform that lets users describe motion effects in natural language and uses LLMs to generate executable Canvas/WebGL code with adjustable parameters. It covers the architecture, workflow, prompt engineering, and export options that enable real‑time, controllable motion graphics.


Background

Bilibili’s multimedia middle platform ships a special‑effects SDK used across editing, shooting, and live‑streaming scenarios. Traditional effect‑production pipelines split into three tracks, each of which sacrifices one of richness, interactivity, or production efficiency.

Motivation

Neon Vibe Motion is an open‑source platform that lets users describe a motion effect in natural language, generates executable rendering code with tunable parameters, previews it on an HTML5 Canvas, and exports the result as MP4 video, a self‑contained HTML page, or a .neon session archive. It works with any LLM that implements the OpenAI Chat Completions API.

Related Work

AI‑driven motion generation falls into two categories:

Pixel generation – text‑to‑video models (e.g., Sora, Veo, Seedance) output immutable pixel videos.

Code generation – LLMs emit executable render code that can be edited and parameterised; Neon follows this path.

Commercial examples include Higgsfield Vibe Motion (Claude‑based) and the open‑source Remotion framework (React‑based), both of which generate code but differ in architecture.

Paradigm Choice: Structured Code vs. Video

Instead of outputting a video file, Neon generates executable rendering code in one of three material package forms:

Declarative JSON config – limited to pre‑defined SDK primitives.

Engine script + custom shader – powerful but requires deep engine knowledge.

Executable Canvas 2D code – Neon’s default; offers flexibility without engine constraints.

Canvas 2D was chosen because LLMs have a higher success rate writing Canvas APIs, thanks to abundant training data.

Rendering Engine Evolution

Neon starts with Canvas 2D driven by requestAnimationFrame for flat effects, data visualisations, and particle systems. To overcome Canvas 2D’s visual limits (no bloom or motion blur, for example), a WebGL post‑processing layer composites the Canvas output as a texture and runs full‑screen shaders over it.
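The article doesn’t show the compositing code, but the idea fits in a short sketch. The snippet below assumes a same‑sized source canvas and uses a stand‑in brightness‑boost shader; drawEffect and the shader body are illustrative, not Neon’s actual bloom or motion‑blur passes.

```ts
// Minimal sketch: upload the Canvas 2D frame as a WebGL texture each frame
// and run it through a full-screen post-process shader.
declare function drawEffect(ctx: CanvasRenderingContext2D, t: number): void; // hypothetical 2D pass

const dst = document.querySelector<HTMLCanvasElement>("#out")!;
const gl = dst.getContext("webgl")!;
const src = document.createElement("canvas");
src.width = dst.width;
src.height = dst.height;
const ctx2d = src.getContext("2d")!;

const vs = `attribute vec2 aPos; varying vec2 vUV;
  void main() { vUV = aPos * 0.5 + 0.5; gl_Position = vec4(aPos, 0.0, 1.0); }`;
const fs = `precision mediump float;
  uniform sampler2D uFrame; varying vec2 vUV;
  void main() {
    vec4 c = texture2D(uFrame, vec2(vUV.x, 1.0 - vUV.y));
    // Illustrative "glow": re-add the bright part of the frame.
    gl_FragColor = c + c * smoothstep(0.7, 1.0, dot(c.rgb, vec3(0.333)));
  }`;

function compile(type: number, source: string): WebGLShader {
  const s = gl.createShader(type)!;
  gl.shaderSource(s, source);
  gl.compileShader(s);
  return s;
}
const prog = gl.createProgram()!;
gl.attachShader(prog, compile(gl.VERTEX_SHADER, vs));
gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, fs));
gl.linkProgram(prog);
gl.useProgram(prog);

// Two triangles covering clip space.
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER,
  new Float32Array([-1, -1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1]), gl.STATIC_DRAW);
const aPos = gl.getAttribLocation(prog, "aPos");
gl.enableVertexAttribArray(aPos);
gl.vertexAttribPointer(aPos, 2, gl.FLOAT, false, 0, 0);

gl.bindTexture(gl.TEXTURE_2D, gl.createTexture());
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

function frame(t: number) {
  drawEffect(ctx2d, t / 1000);              // Canvas 2D render pass
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA,  // upload the 2D frame as a texture
    gl.RGBA, gl.UNSIGNED_BYTE, src);
  gl.drawArrays(gl.TRIANGLES, 0, 6);        // full-screen post-process pass
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```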

Future work may add Three.js for 3D effects, although current LLM‑generated 3D code quality is still low.

Making LLM‑Generated Code Usable

A single LLM call produces runnable code roughly 90 % of the time, but only about 20 % of results meet visual expectations. Neon therefore adds three quality‑control layers; a sketch of how the last two wrap the render loop follows the list:

Clarification – the model asks 1‑5 targeted questions (e.g., main colour, animation speed) before generating code.

Automatic error fixing – runtime exceptions are sent back to the LLM for up to three repair attempts.

Frame‑time detection – if a frame exceeds 100 ms, rendering pauses and the user can adjust parameters.
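The article gives the thresholds but no code. A minimal sketch of layers 2 and 3, assuming the 100 ms budget and three‑attempt limit above; renderFrame, requestRepair, and pauseForTuning are hypothetical names, not Neon’s API.

```ts
// Sketch of quality-control layers 2 and 3 around the render loop.
declare function renderFrame(t: number): void;        // the LLM-generated render function
declare function requestRepair(error: string): void;  // re-prompts the LLM with the exception
declare function pauseForTuning(): void;               // surfaces the parameter UI to the user

const FRAME_BUDGET_MS = 100;
const MAX_REPAIR_ATTEMPTS = 3;
let repairAttempts = 0;

function loop(now: number) {
  const start = performance.now();
  try {
    renderFrame(now / 1000);
  } catch (err) {
    if (repairAttempts++ < MAX_REPAIR_ATTEMPTS) {
      requestRepair(String(err)); // layer 2: ship the runtime exception back to the LLM
    }
    return; // stop the loop until repaired code arrives
  }
  if (performance.now() - start > FRAME_BUDGET_MS) {
    pauseForTuning(); // layer 3: frame too slow, pause and let the user adjust parameters
    return;
  }
  requestAnimationFrame(loop);
}
requestAnimationFrame(loop);
```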

Prompt Engineering for Motion

Vibe Motion prompts must describe motion rules (emission source, velocity decay, random perturbation, external forces, etc.) rather than static appearance. This lets the LLM emit formulas such as velocity = initialSpeed * Math.pow(damping, t), whose variables become natively adjustable parameters in the MotionDefinition.
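As a concrete instance, here is the kind of motion rule such a prompt might yield. The particle structure and parameter names are illustrative, not taken from Neon’s codebase; only the decay formula comes from the article.

```ts
// Illustrative motion rule: emission decay + random perturbation + an external force.
interface Particle { x: number; y: number; vx: number; vy: number; born: number; }

function updateParticle(p: Particle, t: number, params: {
  initialSpeed: number; damping: number; jitter: number; gravity: number;
}) {
  const age = t - p.born;
  // velocity = initialSpeed * damping^t  (the decay rule from the prompt)
  const speed = params.initialSpeed * Math.pow(params.damping, age);
  const mag = Math.hypot(p.vx, p.vy) || 1;
  p.vx = (p.vx / mag) * speed + (Math.random() - 0.5) * params.jitter;
  p.vy = (p.vy / mag) * speed + (Math.random() - 0.5) * params.jitter
       + params.gravity * age; // external force accumulating over the particle's life
  p.x += p.vx;
  p.y += p.vy;
}
```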

Parameter System & Real‑Time Control

The LLM declares a parameters array (name, type, default, range). The front‑end auto‑generates UI controls from it, and changes are injected into the render function each frame. Duration can be a static value or a durationCode expression evaluated at runtime, e.g. Math.max(params.video1.videoDuration, params.video2.videoDuration).
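The article names the fields but not the schema. The sketch below assumes a MotionDefinition shape based on those fields; the exact types and the per‑frame injection mechanism are assumptions.

```ts
// Assumed shape of the parameter declaration the LLM emits and how the
// front-end might consume it (schema is inferred, not Neon's actual one).
interface ParamDecl {
  name: string;
  type: "number" | "color" | "boolean";
  default: number | string | boolean;
  min?: number;            // range, for numeric parameters
  max?: number;
}

interface MotionDefinition {
  parameters: ParamDecl[];
  duration?: number;       // static duration in seconds...
  durationCode?: string;   // ...or an expression evaluated at runtime
  render: (ctx: CanvasRenderingContext2D, t: number,
           params: Record<string, unknown>) => void;
}

// Current values live outside the generated code, so UI changes take
// effect on the very next frame without regenerating anything.
const values: Record<string, unknown> = {};
function runFrame(def: MotionDefinition, ctx: CanvasRenderingContext2D, t: number) {
  def.render(ctx, t, values); // parameters injected each frame
}

// durationCode is evaluated against the live params, e.g.
// "Math.max(params.video1.videoDuration, params.video2.videoDuration)".
function resolveDuration(def: MotionDefinition): number {
  if (def.durationCode) {
    return new Function("params", `return (${def.durationCode});`)(values) as number;
  }
  return def.duration ?? 0;
}
```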

Export Pipeline

Neon supports three export formats:

MP4 video – browser‑side encoding without server upload (see the capture sketch after this list).

Zero‑dependency HTML – contains code, resources, and a parameter panel; instantly runnable.

.neon session archive – JSON bundle preserving dialogue, definition, and parameter snapshots.
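The article doesn’t name the browser‑side encoder. One standard route is canvas.captureStream plus MediaRecorder, sketched below; note that MediaRecorder commonly emits WebM, so a true MP4 export would layer WebCodecs and an MP4 muxer on top of this idea.

```ts
// Sketch of client-side video capture from the preview canvas, with no
// server upload. This is the generic captureStream + MediaRecorder route,
// not necessarily Neon's actual encoder.
function recordCanvas(canvas: HTMLCanvasElement, durationMs: number): Promise<Blob> {
  return new Promise((resolve) => {
    const stream = canvas.captureStream(60);           // 60 fps video track
    const rec = new MediaRecorder(stream, { mimeType: "video/webm" });
    const chunks: Blob[] = [];
    rec.ondataavailable = (e) => chunks.push(e.data);
    rec.onstop = () => resolve(new Blob(chunks, { type: "video/webm" }));
    rec.start();
    setTimeout(() => rec.stop(), durationMs);          // stop after the effect's duration
  });
}
```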

Motion Replication (Video → Code)

Two pipelines were explored:

Fully automatic – a vision‑language model analyses video frames and iteratively refines code, but unstable analysis leads to wasted iterations.

User‑involved staged workflow – key‑frame extraction, structured VLM analysis, user confirmation, then targeted code generation with iterative optimisation. This approach yields good replication for simple particle effects; complex animations remain challenging.

Neon Skill for Coding Agents

Neon’s capabilities are packaged as a Claude Code Skill. System prompts are split into modular documents (e.g., SKILL.md, CANVAS‑GUIDE.md, PARAMETERS.md) to reduce context overload for agents.

CLI Tooling

A Node.js CLI bundles the renderer, injects a MotionDefinition, drives headless Chrome via Playwright, and outputs MP4. This enables batch processing and integration into CI pipelines.
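The CLI source isn’t reproduced in the article. A minimal sketch of the Playwright capture step follows; the bundle path and the page‑side __stepFrame hook are hypothetical, and assembling frames into an MP4 is left to a tool such as ffmpeg.

```ts
// Sketch of the headless-capture step: load the bundled renderer page,
// step the animation deterministically, and screenshot each frame.
import { chromium } from "playwright";

async function capture(pageUrl: string, frames: number, fps: number) {
  const browser = await chromium.launch(); // headless by default
  const page = await browser.newPage({ viewport: { width: 1280, height: 720 } });
  await page.goto(pageUrl);
  for (let i = 0; i < frames; i++) {
    // Advance the injected MotionDefinition by one frame (hypothetical hook).
    await page.evaluate((t) => (window as any).__stepFrame(t), i / fps);
    await page.screenshot({ path: `frames/${String(i).padStart(5, "0")}.png` });
  }
  await browser.close();
  // ffmpeg (or similar) then assembles frames/*.png into an MP4.
}

capture("file:///tmp/neon-bundle/index.html", 300, 60); // 5 s at 60 fps
```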

Core Insight

The central abstraction is MotionDefinition: natural language → motion rules → executable code → tunable parameters. Rendering back‑ends (Canvas, WebGL, Three.js) and export formats are interchangeable layers.

Future Work

Planned directions include making the effect SDK more AI‑friendly, expanding the rendering stack (e.g., richer WebGL post‑processes, stable Three.js integration), and simplifying workflows as LLM capabilities improve.

Repository

GitHub repository: https://github.com/S1mpleSonny/neon-vibe-motion
Tags: prompt engineering, real-time rendering, WebGL, Canvas 2D, AI motion graphics, LLM code generation, Neon Vibe Motion
Written by

Bilibili Tech

Provides introductions and tutorials on Bilibili-related technologies.
