
How to Quickly Build 3D Virtual Avatars with Front‑End Techniques

This article explains the end‑to‑end front‑end workflow for creating real‑time 3D virtual avatars using the Oasis engine, covering glTF asset handling, model and texture replacement, bone animation mapping, inverse kinematics, physics integration, AI enhancements, and a low‑code development approach.

Alipay Experience Technology

Overall Process of Building a Virtual Avatar

Artists model the avatar in a 3D tool, but front‑end developers must get involved early to enforce mobile rendering constraints (such as triangle‑count budgets) and to define naming conventions for bones, skins, and materials. The artist then exports a glTF file, which becomes the basis for all front‑end work.

Oasis Engine Core Capabilities

The Oasis engine provides fundamental features for virtual humans, including morph and skinning animation, material rendering, and a resource loader that easily imports glTF files and decomposes them into components and entities.
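To make the "decompose into entities" step concrete, here is a minimal, self‑contained sketch of how a loader can walk a glTF JSON node hierarchy (the `nodes`/`scenes` arrays defined by the glTF 2.0 spec) into an engine‑style entity tree. The `Entity` class here is a hypothetical stand‑in for the engine's own entity type, not the Oasis API:

```typescript
// Shapes below mirror the glTF 2.0 JSON layout: nodes reference
// children by index, and a scene lists its root node indices.
interface GltfNode { name?: string; children?: number[]; }
interface GltfJson { nodes: GltfNode[]; scenes: { nodes: number[] }[]; scene?: number; }

// Hypothetical minimal entity: a named tree node.
class Entity {
  children: Entity[] = [];
  constructor(public name: string) {}
  addChild(child: Entity): void { this.children.push(child); }
}

// Recursively convert one glTF node (and its subtree) to entities.
function buildEntity(gltf: GltfJson, index: number): Entity {
  const node = gltf.nodes[index];
  const entity = new Entity(node.name ?? `node_${index}`);
  for (const childIndex of node.children ?? []) {
    entity.addChild(buildEntity(gltf, childIndex));
  }
  return entity;
}

// Instantiate the default scene under a single root entity.
function loadScene(gltf: GltfJson): Entity {
  const root = new Entity("root");
  for (const i of gltf.scenes[gltf.scene ?? 0].nodes) {
    root.addChild(buildEntity(gltf, i));
  }
  return root;
}
```

In the real engine the loader also wires up meshes, materials, and skins as components on each entity; this sketch covers only the hierarchy.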

Asset Replacement Techniques

Model Replacement

Fixed‑shape assets such as eyes or equipment are placed under a designated `Head` node in the glTF hierarchy, allowing artists to model them relative to the head origin.
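A hedged sketch of that attachment step: find the node named `Head` in the avatar hierarchy and parent the asset under it. The `Node` class and the `Head` name convention are illustrative; the engine's own entity API would play this role:

```typescript
// Hypothetical scene-graph node: a named tree with children.
class Node {
  children: Node[] = [];
  constructor(public name: string) {}
  addChild(c: Node): void { this.children.push(c); }
}

// Depth-first search for a node by name.
function findByName(root: Node, name: string): Node | null {
  if (root.name === name) return root;
  for (const child of root.children) {
    const hit = findByName(child, name);
    if (hit) return hit;
  }
  return null;
}

// Attach a fixed-shape asset (e.g. glasses) under the head node.
// Because the artist modeled the asset relative to the head origin,
// no extra offset transform is needed.
function attachToHead(avatarRoot: Node, asset: Node): boolean {
  const head = findByName(avatarRoot, "Head");
  if (!head) return false;
  head.addChild(asset);
  return true;
}
```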

Texture Replacement

Non‑geometric parts such as beards or blush are swapped by overlaying new texture maps onto predefined regions of the facial texture.
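The overlay itself is ordinary image compositing. A minimal sketch, assuming tightly packed RGBA8 buffers and straight alpha blending (the region coordinates would come from the predefined facial‑texture layout):

```typescript
// Copy an RGBA decal (e.g. a blush patch) onto a fixed region of
// the face texture, blending by the decal's alpha channel.
function overlayRegion(
  face: Uint8ClampedArray, faceWidth: number,
  decal: Uint8ClampedArray, decalWidth: number, decalHeight: number,
  destX: number, destY: number
): void {
  for (let y = 0; y < decalHeight; y++) {
    for (let x = 0; x < decalWidth; x++) {
      const s = (y * decalWidth + x) * 4;                 // decal pixel
      const d = ((destY + y) * faceWidth + (destX + x)) * 4; // face pixel
      const a = decal[s + 3] / 255;
      for (let c = 0; c < 3; c++) {
        face[d + c] = Math.round(decal[s + c] * a + face[d + c] * (1 - a));
      }
      face[d + 3] = Math.max(face[d + 3], decal[s + 3]);
    }
  }
}
```

In a browser the same effect can be achieved with a 2D canvas `drawImage` call before re‑uploading the texture; the pure‑buffer version above just makes the blending explicit.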

Skin Replacement

Skins are linked to multiple bones; by adhering to naming conventions, developers can locate and replace skin meshes programmatically.
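A sketch of that lookup, assuming a hypothetical `skin_<part>` naming convention (the convention name and data shapes are illustrative, not the engine's API). The key point is that only the mesh reference is swapped while the bone binding stays intact:

```typescript
// A skinned renderer references one mesh and the bones it is bound to.
interface SkinnedRenderer { name: string; mesh: string; bones: string[]; }

// Locate the renderer for a body part by the agreed naming
// convention and swap in a replacement mesh. The new mesh must be
// rigged to the same skeleton, so the bone list is left untouched.
function replaceSkin(
  renderers: SkinnedRenderer[], part: string, newMesh: string
): boolean {
  const target = renderers.find((r) => r.name === `skin_${part}`);
  if (!target) return false;
  target.mesh = newMesh;
  return true;
}
```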

Bone Animation Mapping

To avoid distortion when applying a single animation set to avatars of varying body shapes, bone mapping discards translation and scale data, preserving only rotation so the animation fits any proportion.
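The retargeting rule can be sketched as a per‑bone copy that transfers only the rotation keyframe, leaving the target avatar's own translation and scale untouched (data shapes here are illustrative; quaternions are `[x, y, z, w]`):

```typescript
interface Transform {
  position: [number, number, number];
  rotation: [number, number, number, number]; // quaternion
  scale: [number, number, number];
}

// Apply one frame of a source animation to a target skeleton,
// matching bones by name. Only rotation is copied: copying
// translation/scale would stretch avatars whose bone lengths differ.
function retargetPose(
  sourcePose: Record<string, Transform>,
  targetPose: Record<string, Transform>
): void {
  for (const bone of Object.keys(sourcePose)) {
    const target = targetPose[bone];
    if (!target) continue; // bone absent on this avatar: skip it
    target.rotation = [...sourcePose[bone].rotation];
    // position and scale deliberately untouched
  }
}
```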

Inverse Kinematics and Physics

Inverse kinematics computes joint poses from end‑effector targets (e.g., feet, hands) to adapt avatars to ground contact or focus points. Simple cases use analytical solutions; complex rigs employ iterative methods like FBIK.
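The "simple case" can be illustrated with the classic analytical two‑bone solver in 2D, using the law of cosines. This is a generic sketch (not the engine's solver); `l1`/`l2` are the upper and lower limb lengths, the target is clamped to the reachable range, and degenerate zero‑length inputs are not handled:

```typescript
// Analytical two-bone IK in 2D. Returns the shoulder angle (from
// the +x axis) and the interior elbow angle; PI means fully straight.
function twoBoneIK(
  l1: number, l2: number, tx: number, ty: number
): { shoulder: number; elbow: number } {
  const clamp = (v: number) => Math.min(1, Math.max(-1, v));
  const d = Math.min(Math.hypot(tx, ty), l1 + l2); // clamp to reach
  // Interior elbow angle from the law of cosines.
  const elbow = Math.acos(clamp((l1 * l1 + l2 * l2 - d * d) / (2 * l1 * l2)));
  // Shoulder: aim at the target, minus the offset of the upper bone.
  const shoulder =
    Math.atan2(ty, tx) -
    Math.acos(clamp((l1 * l1 + d * d - l2 * l2) / (2 * l1 * d)));
  return { shoulder, elbow };
}
```

Real rigs add a pole vector to pick which side the elbow/knee bends toward, and full‑body cases fall back to the iterative solvers mentioned above.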

Collision handling uses PhysX.js (PhysX compiled to WebAssembly) for basic collision detection, and NvCloth.js for cloth simulation to prevent garment meshes from interpenetrating the body.
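To show the kind of test a physics layer runs, here is a minimal sphere‑collider overlap check. This only illustrates the idea; the actual detection in this pipeline is delegated to the PhysX WebAssembly build, and the shapes here are assumptions:

```typescript
// Sphere colliders approximating body parts.
interface SphereCollider {
  name: string;
  center: [number, number, number];
  radius: number;
}

// Two spheres overlap when the squared distance between centers is
// at most the squared sum of radii (avoids a sqrt per pair).
function collides(a: SphereCollider, b: SphereCollider): boolean {
  const dx = a.center[0] - b.center[0];
  const dy = a.center[1] - b.center[1];
  const dz = a.center[2] - b.center[2];
  const r = a.radius + b.radius;
  return dx * dx + dy * dy + dz * dz <= r * r;
}
```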

AI End‑to‑End Capabilities

AI technologies such as text‑to‑speech drive facial and skeletal animation, while motion capture data is remapped via bone mapping to avoid distortion on avatars with different body proportions.
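As one speculative illustration of the TTS‑to‑face path: a phoneme timeline produced alongside the synthesized audio can be sampled each frame into a mouth blend‑shape weight. The phoneme names, the viseme table, and the fallback weight below are all assumptions for illustration, not a real API:

```typescript
// One timed phoneme from a hypothetical TTS result.
interface PhonemeEvent { phoneme: string; start: number; end: number; } // seconds

// Illustrative viseme table: how wide the mouth opens per phoneme.
const VISEME_WEIGHTS: Record<string, number> = { a: 1.0, o: 0.8, m: 0.1 };

// Sample the mouth-open morph-target weight at playback time t.
function mouthOpenAt(timeline: PhonemeEvent[], t: number): number {
  const ev = timeline.find((e) => t >= e.start && t < e.end);
  if (!ev) return 0; // silence: mouth closed
  return VISEME_WEIGHTS[ev.phoneme] ?? 0.3; // 0.3 = assumed default for unmapped phonemes
}
```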

Low‑Code Virtual Avatar Development

All aforementioned functionalities are encapsulated in the Ark SDK, which integrates with the Oasis editor plugin. Artists upload assets, perform face sculpting, scene assembly, material tuning, and animation sequencing, then export front‑end components, allowing developers to focus on business logic.

Complex animation logic is managed through an editor‑based animation orchestration UI, where animation layers and conditional scripts define when and how actions are triggered.
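Under the hood, such an orchestration UI typically compiles to a state machine whose transitions carry conditions over runtime parameters. A minimal sketch of that shape (the state and parameter names are illustrative, not the editor's data model):

```typescript
// A transition fires when its condition over the parameter bag holds.
interface Transition {
  from: string;
  to: string;
  when: (params: Record<string, number>) => boolean;
}

class AnimatorStateMachine {
  constructor(public current: string, private transitions: Transition[]) {}

  // Each frame: take the first matching transition out of the
  // current state, if any, and return the (possibly new) state.
  update(params: Record<string, number>): string {
    const tr = this.transitions.find(
      (t) => t.from === this.current && t.when(params)
    );
    if (tr) this.current = tr.to;
    return this.current;
  }
}
```

Animation layers then blend the clips that the active states play, e.g. a waving arm on an upper‑body layer over a walking base layer.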

Future Outlook

With the rise of the metaverse and Web3, virtual human production cost reduction remains critical. Ongoing research targets animation, rendering, physics, and AR/VR integration to place avatars in real environments. The Oasis editor and Ark components will be open‑sourced early next year.

Tags: frontend · low-code · real-time rendering · glTF · Oasis engine · virtual humans · 3D avatars
Written by Alipay Experience Technology

Exploring ultimate user experience and best engineering practices