Real-time LEGO Brick Detection on Mobile Using p5.js and Roboflow
This tutorial shows how to build a mobile web app that captures video from the phone camera, loads a custom LEGO detection model via Roboflow, identifies selected brick types in real time, and visualizes the results on a p5.js canvas. It also covers deployment steps and directions for improvement.
Origin: The author describes a yearly LEGO-building tradition that makes finding specific bricks tedious, inspiring a solution that uses a smartphone camera to locate pieces automatically.
Steps
Capture video from the phone camera and draw frames onto a canvas.
Load a LEGO‑specific object‑detection model.
Select the target brick type.
Run the model on each frame to obtain detection data.
Draw bounding boxes and labels on the canvas.
Deploy the app on a mobile device.
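The first step depends on requesting the phone's rear camera. Isolated from p5.js for illustration, the media constraints used later in the sketch look like this (the `requestedFacingMode` helper is hypothetical, added here only to show what the constraint object encodes):

```javascript
// Media constraints requesting the rear ("environment") camera; p5.js
// passes this object through to navigator.mediaDevices.getUserMedia.
const constraints = { video: { facingMode: "environment" } };

// Hypothetical helper: read the requested facing mode back out, e.g. for
// logging. Browsers fall back to the front ("user") camera when unset.
function requestedFacingMode(c) {
  return (c.video && c.video.facingMode) || "user";
}

console.log(requestedFacingMode(constraints)); // "environment"
```

On a laptop without a rear camera the browser simply picks the closest available device, so the same constraints work on desktop during development.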
Solution Details
The implementation uses p5.js for canvas handling and video capture, and roboflow.js (built on TensorFlow.js) to load a pre‑trained LEGO detection model. The model threshold is lowered to 0.15 to increase recall.
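The effect of the 0.15 threshold can be reproduced with a plain filter. This is a sketch, not roboflow.js internals; the detection objects below are hypothetical samples shaped like roboflow.js output (a `class` name plus a `confidence` score):

```javascript
// Each detection carries a confidence score. A lower cutoff keeps more
// candidate bricks (higher recall) but admits more false positives.
function filterByConfidence(detections, threshold) {
  return detections.filter(d => d.confidence >= threshold);
}

// Hypothetical sample detections for illustration.
const sample = [
  { class: "brick-2x4", confidence: 0.92 },
  { class: "brick-1x2", confidence: 0.31 },
  { class: "plate-2x2", confidence: 0.18 },
];

console.log(filterByConfidence(sample, 0.5).length);  // 1 box survives
console.log(filterByConfidence(sample, 0.15).length); // all 3 survive
```

With a default threshold near 0.5 only the strongest box would be drawn; at 0.15 the borderline bricks show up too, which suits a search task where missing a piece is worse than an occasional false box.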
/**
 * Initialization: create the canvas and configure video capture.
 */
function setup() {
  canvas = createCanvas(640, 480);
  // Convert from the canvas backing-store pixel size back to logical
  // 640x480 coordinates (pixel density differs on mobile screens);
  // fall back to 1 (no scaling) if the size cannot be read.
  scaleX = 640 / +canvas.canvas.width || 1;
  scaleY = 480 / +canvas.canvas.height || 1;
  // Prefer the rear-facing camera on phones.
  let constraints = { video: { facingMode: "environment" } };
  video = createCapture(constraints, videoReady);
  video.size(640, 480);
  video.hide();
}

async function videoReady() {
  console.log("videoReady");
  model = await getModel();
  // Lower the confidence threshold to catch more candidate bricks
  // (higher recall at the cost of more false positives).
  model.configure({ threshold: 0.15 });
  loadText.innerHTML = "modelReady";
  console.log("modelReady");
  processSelect();
  detect();
}

async function getModel() {
  return await roboflow
    .auth({ publishable_key: "rf_lOpB8GQvE5Uwp6BAu66QfHyjPA13" })
    .load({ model: "hex-lego", version: 3 });
}

// Populate the <select> element with the class names the model knows.
function processSelect() {
  const { classes } = model.getMetadata();
  classes.forEach(className => {
    const option = document.createElement("option");
    option.value = className;
    option.text = className;
    selectRef.appendChild(option);
  });
}

const detect = async () => {
  if (!play || !model) {
    console.log("model is not available");
    // Retry in two seconds while paused or while the model is loading.
    timer = setTimeout(() => {
      requestAnimationFrame(detect);
      clearTimeout(timer);
    }, 2000);
    return;
  }
  detections = await model.detect(canvas.canvas);
  console.log("detections", detections);
  requestAnimationFrame(detect);
};

function draw() {
  image(video, 0, 0);
  for (let i = 0; i < detections.length; i++) {
    const object = detections[i];
    // Roboflow reports center-based boxes; convert to top-left corners.
    let { x, y, width, height } = object.bbox;
    width *= scaleX;
    height *= scaleY;
    x = x * scaleX - width / 2;
    y = y * scaleY - height / 2;
    // Highlight the selected brick type in green, everything else in blue.
    stroke(0, 0, 255);
    if (object.class.includes(selectVal)) stroke(0, 255, 0);
    strokeWeight(4);
    noFill();
    rect(Math.floor(x), Math.floor(y), Math.floor(width), Math.floor(height));
    noStroke();
    fill(255);
    textSize(24 * scaleX);
    text(object.class, Math.floor(x) + 10, Math.floor(y) + 24);
  }
}
Deployment
To run the app on Android, install Termux, add Python 3, start a simple HTTP server with python3 -m http.server 8000, and open the page in the mobile browser.
Summary
Technology stack: p5.js for graphics, roboflow.js for model loading.
Core functions: camera capture, model loading, target selection, real‑time detection, result rendering, start/stop controls.
Deployment: instructions for running the app on a phone via Termux.
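The result-rendering step hinges on converting Roboflow's center-based bounding boxes into top-left rectangles before drawing. Isolated from p5.js, that conversion from draw() can be sketched as a pure function (scale factors default to 1, i.e. no pixel-density correction):

```javascript
// Convert a center-based bounding box ({x, y} marks the box center, as
// roboflow.js reports it) into a top-left rectangle for drawing.
function toTopLeftRect(bbox, scaleX = 1, scaleY = 1) {
  const width = bbox.width * scaleX;
  const height = bbox.height * scaleY;
  return {
    x: Math.floor(bbox.x * scaleX - width / 2),
    y: Math.floor(bbox.y * scaleY - height / 2),
    width: Math.floor(width),
    height: Math.floor(height),
  };
}

// A box centered in a 640x480 frame ends up offset by half its size.
const r = toTopLeftRect({ x: 320, y: 240, width: 100, height: 50 });
console.log(r); // { x: 270, y: 215, width: 100, height: 50 }
```

Keeping this logic in one place makes it easy to unit-test the math separately from the canvas code.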
Improvement Directions
Performance: lower frame rate or use a lighter model for mobile devices.
User experience: add loading animations, prompts, and feedback for missed detections.
Offline support: bundle the model locally to work without network.
Model accuracy: fine‑tune with a custom LEGO dataset to reduce false positives.
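The performance suggestion above can be applied without touching the model: gate detect() behind a minimum interval so inference runs less often than the animation loop. A minimal sketch (the 250 ms interval is an assumption, not a value from the original app):

```javascript
// Decide whether enough time has passed to run another detection pass.
// Gating detect() this way caps the inference rate independently of the
// browser's animation frame rate.
function shouldRunDetection(lastRunMs, nowMs, minIntervalMs) {
  return nowMs - lastRunMs >= minIntervalMs;
}

// At 250 ms between passes, the app detects at most 4 times per second.
console.log(shouldRunDetection(1000, 1200, 250)); // false (only 200 ms elapsed)
console.log(shouldRunDetection(1000, 1300, 250)); // true
```

Inside the detect loop, a check like this would simply schedule the next requestAnimationFrame without calling model.detect when the interval has not elapsed, keeping video playback smooth on weaker phones.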
Readers are invited to share additional ideas and improvements.