
Building an AR Sandbox for the Bund Conference with Ant's Custom 3D Marker

This article details how Ant Security's frontend team developed an AR sandbox for the Bund Conference using a self‑crafted 3D marker algorithm, covering framework selection, model conversion, coordinate handling, collision setup, lighting, animation control, and performance optimizations for iOS devices.

Alipay Experience Technology
🙋🏻‍♀️ Editor's note: The author, a frontend engineer at Ant Group, introduces the frontend technologies behind the AR sandbox showcased at the Bund Conference. Unlike the previous "AR monster" project, which used a ready‑made 3DoF algorithm, this sandbox employs a self‑developed 3D marker algorithm and involves more models, a more complex scene, richer rendering logic, and a deeper animation hierarchy.

1. Preface

1.1 What We Aim to Do

Ant Security has nine labs covering AI security, intelligent risk control, data protection, and more. The Bund Conference (Sept 7‑9, Shanghai) was an opportunity to showcase these labs. Using AR, we wanted participants to explore the labs through a 3D‑printed physical sandbox overlaid with virtual models, accessible from an Alipay mini‑program.

Users scan the physical sandbox with AR, see a combined view of real and virtual models, and can tap virtual models for detailed information.

1.2 Who We Are

This project is a unified showcase of Ant Security labs at the Bund Conference, handled by the Ant Security Frontend (ASF) team. We provide frontend and experience technology support for the entire security division, focusing on productizing security capabilities with a warm user experience.

2. Implementation Process

2.1 Framework

The AR experience runs inside a mini‑program, integrating the Galacean AR framework which provides camera video streaming and virtual rendering capabilities.

2.2 Object Recognition

The mini‑program AR framework includes 6DoF, 3DoF, and gravity‑aware 3DoF tracking. However, these methods cannot locate objects relative to the physical sandbox, which is required for virtual‑real integration.

We therefore use a custom 3D marker algorithm from the vision team, which detects the spatial coordinates of the physical sandbox and continuously adjusts the virtual objects to achieve seamless AR overlay.
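The algorithm's internals are not shown in this article, but one typical concern when continuously applying a detected pose is per-frame jitter. The following is a hypothetical sketch of exponentially smoothing the reported position before applying it to the virtual root; the names (`Pose`, `smoothPose`, `alpha`) are illustrative, not the real API.

```typescript
// Hypothetical sketch: exponentially smooth the pose reported by the
// 3D marker algorithm before applying it to the virtual sandbox root,
// so small per-frame detection jitter does not make the models shake.
export interface Pose {
  position: [number, number, number];
}

// alpha in (0, 1]: 1 = follow the detection exactly,
// smaller = smoother but laggier tracking.
export function smoothPose(prev: Pose, detected: Pose, alpha: number): Pose {
  const lerp = (a: number, b: number) => a + (b - a) * alpha;
  return {
    position: [
      lerp(prev.position[0], detected.position[0]),
      lerp(prev.position[1], detected.position[1]),
      lerp(prev.position[2], detected.position[2]),
    ],
  };
}
```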

2.3 Model Handling

The external partner Light Cloud supplied the 3D models, but they were in a proprietary format, not the glTF format supported by Galacean. We converted them to glTF and kept each model under the 5 MB size limit required by the Alipay super‑app.

2.4 Functionality and Interaction Flow

The flow includes an entry page with Lottie animation that checks device compatibility, camera permission, client version, and pre‑loads algorithm bundles and glTF resources.
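The version check in the entry page can be factored into a pure function. This is a minimal sketch, assuming the real page reads device info via the mini-program API (e.g. `my.getSystemInfo`); the minimum client version `"10.2.0"` is a hypothetical placeholder, not the project's actual threshold.

```typescript
// Minimal sketch of the entry-page compatibility gate. The real page
// would obtain this info from the mini-program runtime; the threshold
// version here is a placeholder.
export interface SystemInfo {
  platform: 'iOS' | 'Android';
  appVersion: string; // Alipay client version, e.g. "10.2.36"
}

// Compare dotted version strings: negative if a < b, 0 if equal, positive if a > b.
export function compareVersion(a: string, b: string): number {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const d = (pa[i] ?? 0) - (pb[i] ?? 0);
    if (d !== 0) return d;
  }
  return 0;
}

export function isClientSupported(info: SystemInfo, minVersion = '10.2.0'): boolean {
  return compareVersion(info.appVersion, minVersion) >= 0;
}
```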

The game page initializes the Galacean scene and the 3D marker algorithm, renders the glTF resources, and controls animations.

2.4.1 Algorithm Integration

We created a `Detecting` class to receive position data from the algorithm, and a `PointCloud` class to render the returned points as a cloud in space.

After making the point cloud transparent, we attach the lab models and secondary panels as children of the point cloud to achieve sandbox positioning.

2.4.2 Lab Model Display Logic

Initially we planned a scheme in which the lab centered in the camera view plays its animation while the others stay static, and zooming out animates all nine labs together. However, the 3D marker algorithm returns a single coordinate for the whole sandbox rather than per‑lab positions, so the nine labs must be treated as a single entity and cannot be controlled independently.

iOS devices suffer performance issues: all algorithm calculations run in JavaScript, and because iOS restricts JIT compilation, JS throughput there is roughly 30% of comparable Android devices. To mitigate this, we pause lab animations on iOS, which also makes the per‑lab animation scheme above unsuitable.
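The iOS mitigation can be expressed as a small platform-dependent policy. This is a hedged sketch under the article's stated constraint (animations paused on iOS); the shape of the policy object is illustrative, not the project's real code.

```typescript
// Sketch of the iOS mitigation: given the device platform, decide
// whether lab animations should run. An animator speed of 0 pauses a
// Galacean Animator; 1 plays at normal speed.
export interface AnimationPolicy {
  playLabAnimations: boolean;
  animatorSpeed: number;
}

export function animationPolicyFor(platform: string): AnimationPolicy {
  const isIOS = /ios|iphone|ipad/i.test(platform);
  return isIOS
    ? { playLabAnimations: false, animatorSpeed: 0 } // JS-bound: pause
    : { playLabAnimations: true, animatorSpeed: 1 }; // Android: play normally
}
```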

2.4.3 Lab Model Positioning

We considered two placement strategies. Strategy 1 adjusts each model's position after rendering; Strategy 2 provides each model already positioned relative to the sandbox origin. Strategy 2 reduces cost and debugging effort, so we chose it.

Galacean uses a right‑handed coordinate system with the camera looking toward −Z, while Unity (where the source models were authored) uses a left‑handed system:

Unity: left‑handed, clockwise positive rotation.

Galacean: right‑handed, counter‑clockwise positive rotation.
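To make the handedness difference concrete, here is a minimal sketch of the position conversion. This is the common Z‑flip convention, not code from the project; how Euler rotations map depends on the exporter's rotation order, so verify against your own assets.

```typescript
// Convert a point from Unity's left-handed frame to Galacean's
// right-handed frame. Both engines are Y-up; flipping Z switches
// handedness. (A common convention; exporters may differ.)
export type Vec3 = [number, number, number];

export function unityToGalaceanPosition([x, y, z]: Vec3): Vec3 {
  return [x, y, -z];
}
```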

2.4.4 Secondary Panel Presentation

2.4.4.1 Display Scheme

Two options were evaluated: (1) each lab shows its panel above the model, allowing multiple panels simultaneously (causing visual clutter and performance issues); (2) a fixed‑position panel where only one appears at a time. We adopted option 2 and hide the lab model when its panel is shown to focus user attention.

2.4.4.2 Interaction Scheme

To close a panel, users can tap anywhere on the screen, because the hidden lab model has no visible collider. We listen for `onTouchEnd` on a global view and call `hideWin` to hide the panel.

<code>&lt;!-- index.axml --&gt;
&lt;canvas disable-scroll="true" onReady="onCanvasReady" class="canvas" id="canvas" type="webgl" /&gt;
&lt;view disable-scroll="true" class="full" onTouchStart="onTouchStart" onTouchMove="onTouchMove" onTouchEnd="onTouchEnd" onTouchCancel="onTouchCancel" /&gt;

// index.ts
// ...other code
onTouchEnd(e) {
  dispatchPointerUp(e);
  dispatchPointerLeave(e);
  GameCtrl.ins.hideWin();
}

// GameCtrl.ts
// ...other code
/**
 * Tap the screen to hide the secondary panel
 */
hideWin() {
  // Only hide when the lab models are not displayed
  if (!shapanRoot.isActive) {
    let activeWin;
    // Find the currently active panel
    for (const key in winList) {
      if (winList[key].root.isActive) {
        activeWin = winList[key];
      }
    }
    if (!activeWin) return;
    // Play the panel-hide animation in reverse
    const animator = activeWin.root.getComponent(Animator);
    const animatorName = activeWin.animations![0].name;
    const state = animator.findAnimatorState(animatorName);
    activeWin.isPlayLoop = false;
    animator.speed = -1;
    state.wrapMode = 0;
    state.clipStartTime = 0;
    state.clipEndTime = 0.46;
    animator.play(animatorName);
  }
}
</code>

2.5 Collider Usage

Because AR interactions occur inside a canvas, we attach colliders to entities to capture click events via `onPointerClick`. Colliders require the Galacean physics engine; we use `LitePhysics` from `@galacean/engine-physics-lite` (v1.0.0‑beta.14).

<code>MiniXREngine.create({
  canvas,
  XRDelegate: AntAR,
  physics: new LitePhysics(),
});
</code>

The interaction chain is:

Tap a lab to show its secondary panel and hide all labs.

Tap again to hide the panel and show the labs.

We group the nine labs under a common parent entity for collective show/hide control, while keeping colliders on a separate entity so they remain active when labs are hidden.

<code>// Collider creation example
const cubeSize = 0.26;
const cubeEntity = collRoot.createChild('cube');
// Labs are laid out on a 3 x 3 grid: x is the column, z is the row
const xs = [-0.65, 0, 0.65];
const zs = [0.5, 0, -0.5];
cubeEntity.transform.setPosition(xs[Math.floor(i / 3)], 0.7, zs[i % 3]);
cubeEntity.transform.setScale(cubeSize, cubeSize, cubeSize);

const colliderSize = 1;
const boxColliderShape = new BoxColliderShape();
boxColliderShape.size.set(colliderSize, colliderSize, colliderSize);
const boxCollider = cubeEntity.addComponent(StaticCollider);
boxCollider.addShape(boxColliderShape);
</code>

2.6 Model Rendering

2.6.1 Model Production Requirements

The external partner's models sometimes include the `KHR_draco_mesh_compression` extension, which triggers a Web Worker that mini‑programs do not support, causing load failures. Models must therefore be delivered without Draco compression.
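As a defensive measure against these load failures, a loader wrapper could inspect the parsed glTF document before handing it to the engine, rejecting compressed models with a clear error instead of failing inside an unsupported worker. This is an illustrative sketch, not part of the project's code; for a binary .glb the JSON chunk would need to be extracted first.

```typescript
// Pre-flight check: does a parsed glTF document declare the Draco
// compression extension that mini-program loaders cannot handle?
interface GltfDocument {
  extensionsUsed?: string[];
  extensionsRequired?: string[];
}

export function usesDracoCompression(gltf: GltfDocument): boolean {
  const ext = 'KHR_draco_mesh_compression';
  return (
    (gltf.extensionsUsed ?? []).includes(ext) ||
    (gltf.extensionsRequired ?? []).includes(ext)
  );
}
```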

2.6.2 Adding Lighting

We added environment lighting and a directional light to reduce model flickering.

<code>// Ambient light
engine.resourceManager
  .load<AmbientLight>({
    type: AssetType.Env,
    url: 'https://mdn.alipayobjects.com/portal_x4occ0/afts/file/A*34_0RLzSf2AAAAAAAAAAAAAAAQAAAQ/light.env'
  })
  .then((ambientLight) => {
    scene.ambientLight = ambientLight;
  });

// Directional light
const lightEntity = root.createChild('light');
const directLight = lightEntity.addComponent(DirectLight);
directLight.color.set(1, 1, 1, 1);
lightEntity.transform.setRotation(-45, -45, 0);
</code>

2.6.3 Model Animation

The nine labs require animation phases: fade‑in, loop, and fade‑out. The provider delivered a single animation containing all phases; we control playback speed and clip times to achieve the three effects.

<code>let isClicked = false;
cubeEntity.addComponent(Script).onPointerClick = () => {
  if (isClicked) return;
  isClicked = true;
  let activeWin, animatorName, animator, state, operaWin;
  for (const key in winList) {
    if (winList[key].root.isActive) {
      activeWin = winList[key];
    }
  }
  if (activeWin) {
    operaWin = activeWin;
    animatorName = activeWin.animations![0].name;
    animator = operaWin.root.getComponent(Animator);
    state = animator.findAnimatorState(animatorName);
  } else {
    operaWin = winList[`win${i}`];
    animatorName = winList[`win${i}`].animations![0].name;
    animator = operaWin.root.getComponent(Animator);
    state = animator.findAnimatorState(animatorName);
  }
  // State machine script to handle panel show/hide
  state.addStateMachineScript(class extends StateMachineScript {
    onStateExit(winanimator, animatorState, layerIndex) {
      if (operaWin.isPlayLoop) {
        animatorState.wrapMode = 1;
        animatorState.clipStartTime = 0.46;
        animatorState.clipEndTime = 1;
        winanimator.speed = 1;
        winanimator.play(winList[`win${i}`].animations![0].name);
      } else {
        winanimator.speed = 1;
        shapanRoot.isActive = true;
        operaWin.root.isActive = false;
      }
    }
  });

  if (root.isActive) {
    // Show panel, hide sandbox
    operaWin.root.isActive = true;
    root.isActive = false;
    operaWin.isPlayLoop = true;
    state.wrapMode = 0;
    state.clipStartTime = 0;
    state.clipEndTime = 0.46;
    animator.speed = 1;
    animator.play(animatorName);
    isClicked = false;
  } else {
    // Hide panel, show sandbox
    operaWin.isPlayLoop = false;
    animator.speed = -1;
    state.wrapMode = 0;
    state.clipStartTime = 0;
    state.clipEndTime = 0.46;
    animator.play(animatorName);
    isClicked = false;
  }
};
</code>

2.6.4 Occlusion Effect

To achieve realistic occlusion between virtual objects and the physical sandbox, we create a transparent model of the sandbox that writes no color but participates in depth testing.

<code>engine.resourceManager
  .load<GLTFResource>({
    url: 'https://mdn.alipayobjects.com/portal_x4occ0/afts/file/A*xN9wQbbbcZIAAAAAAAAAAAAAAQAAAQ/sp.glb',
    type: AssetType.GLTF,
  })
  .then((asset) => {
    const { defaultSceneRoot, materials } = asset;
    defaultSceneRoot.transform.setPosition(0, -0.45, 0);
    defaultSceneRoot.transform.setRotation(90, 0, 0);
    defaultSceneRoot.transform.setScale(1, 1, 1);
    spRoot.addChild(defaultSceneRoot);
    const meshRenderers = [];
    defaultSceneRoot.getComponentsIncludeChildren(MeshRenderer, meshRenderers);
    materials.forEach((material) => {
      material.renderState.blendState.targetBlendState.colorWriteMask = ColorWriteMask.None;
    });
    meshRenderers.forEach((meshRenderer) => {
      meshRenderer.priority = -999;
    });
  });
</code>

3. Conclusion

Through close collaboration among the algorithm team, AR team, and external partners, the AR sandbox was completed and deployed before the Bund Conference.

The sandbox represents Ant Security's second AR venture after the "AR monster" project, this time using a self‑developed 3D marker algorithm and handling more complex models, scenes, rendering logic, and animation hierarchies.

The experience has provided valuable insights for future interactive rendering projects.

Tags: Frontend · animation · AR · mini-program · Collision · Galacean · 3D marker
Written by Alipay Experience Technology

Exploring ultimate user experience and best engineering practices
