Digital Human Technology: Design, Production, and Future Directions
The article surveys digital-human technology, covering visual form, motion, and AI-driven intelligence; its fast-growing market; a three-layer solution stack (hardware/software base, AI platform, application services); the end-to-end avatar creation workflow; rendering and animation techniques; web-deployment challenges; and future prospects such as deeper AI, XR/6G, and metaverse integration.
This article, originally presented at the 16th D2 Front‑End Technology Forum, introduces the concept of digital humans and outlines their three core attributes: "Form" (visual appearance), "Motion" (behaviour and expression), and "Intelligence" (environment perception and interaction).
It surveys the current market, noting rapid growth in sectors such as e-commerce, finance, film, and gaming, and describes three development stages: startup, growth, and platform.
The solution stack is divided into three layers:
Base layer – hardware (displays, sensors, chips) and software (modeling tools, rendering engines).
Platform layer – AI capabilities, production‑service platforms, and system integration.
Application layer – end‑user products and creative services.
The workflow for creating a digital avatar is detailed:
Model the base mesh and rig in Maya, store assets in OSS.
Create textures in Photoshop and publish to CDN.
Develop a custom glTF exporter plugin in Maya to output geometry, skeleton, and material data.
Adjust materials via a web‑based editor.
Import glTF into the EVA Figure engine and render with custom shaders.
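To make the last step concrete, the sketch below shows the same flow in generic web terms: fetch the exported glTF from the CDN and swap in a custom shader material. EVA Figure's actual API is not shown in the article, so this uses three.js's GLTFLoader purely as a stand-in, and the CDN URL and shader are placeholders.

```ts
// Illustrative only: EVA Figure's API is not documented here, so this sketch
// uses three.js to show the equivalent flow of loading a glTF avatar from a
// CDN and attaching a custom shader material.
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();

// Hypothetical CDN URL for the exported avatar asset.
const AVATAR_URL = 'https://cdn.example.com/avatars/base.gltf';

// Placeholder custom shader. A production skin shader would also include
// three.js's skinning shader chunks so the skeleton can deform the mesh.
const skinMaterial = new THREE.ShaderMaterial({
  uniforms: { uTint: { value: new THREE.Color('#f2c9a8') } },
  vertexShader: /* glsl */ `
    varying vec3 vNormal;
    void main() {
      vNormal = normalMatrix * normal;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }`,
  fragmentShader: /* glsl */ `
    uniform vec3 uTint;
    varying vec3 vNormal;
    void main() {
      float light = max(dot(normalize(vNormal), vec3(0.0, 0.0, 1.0)), 0.0);
      gl_FragColor = vec4(uTint * (0.3 + 0.7 * light), 1.0);
    }`,
});

new GLTFLoader().load(AVATAR_URL, (gltf) => {
  // Replace the exported materials with the custom shader on skinned meshes.
  gltf.scene.traverse((obj) => {
    if ((obj as THREE.SkinnedMesh).isSkinnedMesh) {
      (obj as THREE.Mesh).material = skinMaterial;
    }
  });
  scene.add(gltf.scene);
});
```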
Facial customization uses a combination of skeletal skinning and blend‑shape (morph target) deformation, enabling thousands of unique faces. Clothing is handled by sharing the same skeleton between body and garments, with mesh clipping to avoid penetration.
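A minimal sketch of the blend-shape side of that customization: each preset stores per-vertex offsets from the neutral face, and slider weights mix them before skeletal skinning is applied on top. The type and function names here are illustrative, not from the article.

```ts
// Morph-target (blend-shape) deformation: final = neutral + sum(w_i * offset_i).
type Vec3 = [number, number, number];

interface BlendShape {
  name: string;    // e.g. "jawWide", "eyesLarge"
  offsets: Vec3[]; // per-vertex delta from the neutral mesh
}

function applyBlendShapes(
  neutral: Vec3[],
  shapes: BlendShape[],
  weights: Record<string, number>, // slider values in [0, 1]
): Vec3[] {
  return neutral.map((v, i) => {
    let [x, y, z] = v;
    for (const shape of shapes) {
      const w = weights[shape.name] ?? 0;
      if (w === 0) continue;
      const [dx, dy, dz] = shape.offsets[i];
      x += w * dx;
      y += w * dy;
      z += w * dz;
    }
    return [x, y, z];
  });
}

// A handful of sliders combined this way already yields a distinct face;
// skeletal skinning then poses and animates the deformed mesh.
```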
Rendering styles include Physically Based Rendering (PBR) for realistic results and Non‑Photorealistic Rendering (NPR) for stylized, cartoon‑like appearances.
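In web-engine terms the two styles roughly map to a physically based material versus a toon (cel-shaded) material. The sketch below illustrates the switch with standard three.js materials, since the article's own EVA Figure shaders are not listed.

```ts
import * as THREE from 'three';

// Choose a material for the avatar depending on the target rendering style.
function makeAvatarMaterial(style: 'pbr' | 'npr'): THREE.Material {
  if (style === 'pbr') {
    // Physically based: albedo, metalness and roughness drive the lighting model.
    return new THREE.MeshStandardMaterial({
      color: 0xf2c9a8,
      metalness: 0.0,
      roughness: 0.6,
    });
  }
  // Non-photorealistic: banded (cel) shading for a cartoon-like look.
  return new THREE.MeshToonMaterial({ color: 0xf2c9a8 });
}
```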
Animation techniques cover skeletal animation, morph-target animation, and motion capture. The article explains animation blending for smooth transitions and a director system that scripts complex sequences, such as virtual concerts.
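A hedged sketch of animation blending plus a toy director script, using three.js's AnimationMixer and crossFadeTo; the article's actual director system is not specified, so the sequencing helper below is purely illustrative.

```ts
import * as THREE from 'three';

// Build a mixer and one action per clip. The caller must call
// mixer.update(delta) every frame in the render loop.
function setupMixer(avatar: THREE.Object3D, clips: THREE.AnimationClip[]) {
  const mixer = new THREE.AnimationMixer(avatar);
  const actions = new Map<string, THREE.AnimationAction>();
  for (const clip of clips) actions.set(clip.name, mixer.clipAction(clip));
  return { mixer, actions };
}

// Smoothly blend from one skeletal clip to another (e.g. idle -> wave).
function crossFade(
  from: THREE.AnimationAction,
  to: THREE.AnimationAction,
  seconds = 0.4,
) {
  to.reset().play();
  from.crossFadeTo(to, seconds, /* warp */ true);
}

// A minimal "director": play named clips in order with timed crossfades.
async function runSequence(
  actions: Map<string, THREE.AnimationAction>,
  steps: { clip: string; holdSeconds: number }[],
) {
  let current: THREE.AnimationAction | undefined;
  for (const step of steps) {
    const next = actions.get(step.clip);
    if (!next) continue;
    if (current) crossFade(current, next);
    else next.play();
    current = next;
    await new Promise((r) => setTimeout(r, step.holdSeconds * 1000));
  }
}
```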
Challenges specific to web deployment are discussed, such as the performance gap between WebGL (based on OpenGL ES 2.0) and native APIs (Vulkan, DirectX, Metal). Optimizations involve serverless rendering, the EVA Figure engine, Puppeteer, and emerging WebGPU/WASM technologies.
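One way to picture the serverless-rendering path: a cloud function drives headless Chrome via Puppeteer, loads the avatar page, and returns a rendered frame. The page URL, viewport, and Chromium flags below are assumptions for illustration, not the article's configuration.

```ts
import puppeteer from 'puppeteer';

// Render one frame of a WebGL avatar page server-side and return it as PNG bytes.
export async function renderAvatarFrame(pageUrl: string): Promise<Buffer> {
  const browser = await puppeteer.launch({
    headless: true,
    // Flags often needed so WebGL works in constrained server environments.
    args: ['--no-sandbox', '--use-gl=swiftshader'],
  });
  try {
    const page = await browser.newPage();
    await page.setViewport({ width: 720, height: 1280 });
    await page.goto(pageUrl, { waitUntil: 'networkidle0' });
    const shot = await page.screenshot({ type: 'png' });
    return Buffer.from(shot);
  } finally {
    await browser.close();
  }
}
```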
Future directions point to deeper AI integration, large‑scale data support, XR/6G, brain‑computer interfaces, and the broader metaverse.
DaTaobao Tech