Build WebAR Experiences with A‑Frame, MindAR, and WebXR: From 3DoF to 6DoF
This tutorial walks you through creating WebAR and WebVR scenes with A-Frame, Three.js, and MindAR: 3DoF basics, adding 6DoF models, handling AR events, and integrating the workflow into a React application.
3DoF and 6DoF
3DoF (three degrees of freedom) tracks orientation only: rotation about the x, y, and z axes. 6DoF (six degrees of freedom) adds translation along those axes, so the viewer can move through the scene as well as look around. Starting with rotation alone, we first build a 3D box with A-Frame.
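Before touching markup, the difference can be sketched in plain JavaScript (the object shapes below are illustrative, not part of A-Frame's API):

```javascript
// A 3DoF pose carries orientation only (degrees about x, y, z).
const threeDofPose = { rotation: { x: 0, y: -30, z: 0 } };

// A 6DoF pose adds a position: x right, y up, z toward the viewer.
const sixDofPose = {
  position: { x: 0, y: 1.6, z: 2 },
  rotation: { x: 0, y: -30, z: 0 },
};

// With 3DoF the viewpoint never moves, only the view direction does;
// with 6DoF the viewpoint itself translates through the scene.
function viewpointOf(pose) {
  return pose.position ?? { x: 0, y: 0, z: 0 };
}
```

The A-Frame example below stays in rotation-only territory: a static box whose orientation we will change by editing its rotation attribute.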
<!DOCTYPE html>
<html>
<head>
<script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
</head>
<body>
<a-scene>
<a-box position="0 0 -5" rotation="0 0 0" color="#d4380d"></a-box>
<a-sky color="#1890ff"></a-sky>
</a-scene>
</body>
</html>
The box’s position uses x, y, z coordinates (x right, y up, z toward the viewer), and rotation uses degrees about the same axes. The next snippet rotates the box -30 degrees about the y axis.
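To see what an attribute such as rotation="0 -30 0" does numerically, here is the standard y-axis (yaw) rotation in plain JavaScript (the helper name is ours, not A-Frame's; A-Frame applies the same math internally through Three.js):

```javascript
// Rotate a point about the y (up) axis by an angle in degrees.
// Positive angles turn counter-clockwise when viewed from above,
// matching the right-handed coordinates A-Frame inherits from Three.js.
function rotateY({ x, y, z }, degrees) {
  const r = (degrees * Math.PI) / 180;
  return {
    x: x * Math.cos(r) + z * Math.sin(r),
    y,
    z: -x * Math.sin(r) + z * Math.cos(r),
  };
}

// A point 5 units in front of the viewer (z = -5), turned a quarter
// circle, ends up 5 units to the left (x = -5).
const turned = rotateY({ x: 0, y: 0, z: -5 }, 90);
```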
<!DOCTYPE html>
<html>
<head>
<script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
</head>
<body>
<a-scene>
<a-box position="0 0 -5" rotation="0 -30 0" color="#eb2f96"></a-box>
<a-sky color="#1890ff"></a-sky>
</a-scene>
</body>
</html>
From 3DoF to 6DoF
To enrich the scene, we load a GLB model through a-assets and place it alongside the box; the gltf-model component references the asset by its id.
<a-assets>
<a-asset-item id="glass" src="./model.glb"></a-asset-item>
</a-assets>
<a-entity position="0 1.5 -4" scale="5.0 5.0 5.0" gltf-model="#glass"></a-entity>
AR First Step – Putting Glasses on a Face
Mounting Glasses on the Face
A-Frame can also serve as the foundation for AR when combined with MindAR's face-tracking module.
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1"/>
<script src="https://cdn.jsdelivr.net/gh/hiukim/[email protected]/dist/mindar-face.prod.js"></script>
<script src="https://aframe.io/releases/1.2.0/aframe.min.js"></script>
<script src="https://cdn.jsdelivr.net/gh/hiukim/[email protected]/dist/mindar-face-aframe.prod.js"></script>
<style>body{margin:0;}</style>
</head>
<body>
<div class="example-container">
<a-scene mindar-face embedded color-space="sRGB" renderer="colorManagement: true, physicallyCorrectLights" vr-mode-ui="enabled: false" device-orientation-permission-ui="enabled: false">
<a-assets>
<a-asset-item id="headModel" src="https://cdn.jsdelivr.net/gh/hiukim/[email protected]/examples/face-tracking/assets/sparkar/headOccluder.glb"></a-asset-item>
<a-asset-item id="glassModel" src="./model.glb"></a-asset-item>
</a-assets>
<a-camera active="false" position="0 0 0"></a-camera>
<a-entity mindar-face-target="anchorIndex: 168">
<a-gltf-model mindar-face-occluder position="0 -0.3 0.15" rotation="0 0 0" scale="0.06 0.06 0.06" src="#headModel"></a-gltf-model>
</a-entity>
<a-entity mindar-face-target="anchorIndex: 10">
<a-gltf-model rotation="0 -0 0" position="0 -0.5 -0.6" scale="5.8 5.8 5.8" src="#glassModel" visible="true"></a-gltf-model>
</a-entity>
</a-scene>
</div>
</body>
</html>
The mindar-face-occluder attribute turns the head model into an occluder: it is rendered for depth only, without drawing its colors, so any part of the glasses that passes behind the head is correctly hidden.
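The anchorIndex values pick landmarks on the tracked face mesh (MindAR follows MediaPipe's face-mesh indexing, where for example 10 sits near the top of the forehead and 168 between the eyes). A small illustrative lookup, with names of our own invention rather than any MindAR API:

```javascript
// Illustrative shorthand for the two MediaPipe face-mesh landmark
// indices used in the example above.
const FACE_ANCHORS = {
  foreheadTop: 10,  // anchor for the glasses entity
  betweenEyes: 168, // anchor for the head occluder
};

// Build the attribute value MindAR expects for a given anchor name.
function anchorAttribute(name) {
  if (!(name in FACE_ANCHORS)) throw new Error(`unknown anchor: ${name}`);
  return `anchorIndex: ${FACE_ANCHORS[name]}`;
}
```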
Event Handling
Listen for AR lifecycle events: the scene emits arReady and arError, while anchor entities emit targetFound and targetLost. For example:
document.addEventListener("DOMContentLoaded", () => {
  const scene = document.querySelector("a-scene");
  const arSystem = scene.systems["mindar-face-system"]; // exposes start() and stop()
  scene.addEventListener("arReady", () => {
    alert("AR system loaded successfully!");
  });
  scene.addEventListener("arError", () => {
    console.error("AR system failed to start (e.g. camera permission denied)");
  });
});
Underlying Technologies
MindAR relies on WebAssembly (wasm), SIMD, and WebGL2 (or WebGPU) to achieve real-time tracking performance.
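A quick way to probe a runtime for these capabilities is sketched below (the SIMD test bytes follow the wasm-feature-detect approach of validating a tiny module that uses a v128 instruction; this is our own sketch, not MindAR's internal check):

```javascript
// Probe the runtime for the features MindAR leans on.
function detectCapabilities() {
  const wasm = typeof WebAssembly === "object" &&
    typeof WebAssembly.validate === "function";

  // Engines without wasm SIMD reject this module during validation.
  const simd = wasm && WebAssembly.validate(new Uint8Array([
    0, 97, 115, 109, 1, 0, 0, 0, 1, 5, 1, 96, 0, 1, 123,
    3, 2, 1, 0, 10, 10, 1, 8, 0, 65, 0, 253, 15, 253, 98, 11,
  ]));

  // WebGL2 needs a browser canvas; non-browser runtimes report false.
  const webgl2 = typeof document !== "undefined" &&
    !!document.createElement("canvas").getContext("webgl2");

  return { wasm, simd, webgl2 };
}
```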
Using React with MindAR
The same setup drops into React: a viewer component (here MindARViewer, defined in mindar-viewer.js) wraps the A-Frame scene, and App toggles it on and off.
import React, { useState } from 'react';
import 'mind-ar/dist/mindar-image.prod.js';
import 'aframe';
import 'mind-ar/dist/mindar-image-aframe.prod.js';
import './App.css';
import MindARViewer from './mindar-viewer';
function App() {
const [started, setStarted] = useState(false);
return (
<div className="App">
<h1>Example React component with <a href="https://github.com/hiukim/mind-ar-js" target="_blank">MindAR</a></h1>
<div>
{!started && <button onClick={() => setStarted(true)}>Start</button>}
{started && <button onClick={() => setStarted(false)}>Stop</button>}
</div>
{started && (
<div className="container">
<MindARViewer />
<video></video>
</div>
)}
</div>
);
}
export default App;
Conclusion
WebAR development involves three core steps: image/object detection with deep-learning models (e.g., TensorFlow.js), 3D modeling of the tracked objects, and compositing the models with libraries such as Three.js, Babylon.js, or A-Frame. Making these pipelines fast on mobile devices requires careful use of WebGL/WebGPU, wasm, and SIMD; future work will integrate digital-twin concepts for data-driven, intelligent interactions.