
Implementation of Driver Authentication Video Capture Using WebRTC and RecordRTC

The project implements a cross‑platform driver authentication video capture module: WebRTC accesses the rear‑facing camera, RecordRTC records a five‑second clip under custom constraints, and the resulting Blob is uploaded to Alibaba Cloud OSS for OCR. This keeps the capture experience consistent across native apps, mini‑programs, and external H5 pages.


Project background: The driver authentication feature is an H5‑based module deployed across multiple platforms (Harbor App, driver App, freight driver App, Alipay mini‑program, WeChat mini‑program, external H5 pages). It requires video capture capability on all these platforms.

Key challenges include multi‑platform adaptation, mini‑program compatibility (Alipay provides a native capture component, while WeChat mini‑programs lack a video capture API), and browser compatibility for external H5 pages.
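Routing between these capture paths typically starts with platform detection. A minimal sketch of user‑agent sniffing (the function name and the fallback behavior are assumptions, not the project's actual routing logic; MicroMessenger and AlipayClient are the usual UA markers for the WeChat and Alipay containers):

```javascript
// Hypothetical helper: pick a capture strategy per platform.
function detectCapturePlatform(userAgent) {
  const ua = userAgent.toLowerCase();
  if (ua.includes('micromessenger')) return 'wechat'; // no H5 video capture API
  if (ua.includes('alipayclient')) return 'alipay';   // native capture available
  return 'h5';                                        // fall back to WebRTC + RecordRTC
}
```

In the real module each branch would hand off to the platform's own capture component, with the WebRTC path described below serving browsers and external H5 pages.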

WebRTC overview: WebRTC (Web Real‑Time Communication) enables peer‑to‑peer audio, video, and data streams directly between browsers, without routing media through an intermediary server (a separate signaling channel is still needed to establish the connection). Its core APIs are getUserMedia, RTCPeerConnection, and RTCDataChannel. RecordRTC is a JavaScript library that wraps the MediaStream obtained from getUserMedia to simplify recording.

Implementation architecture: The solution combines WebRTC for media access and RecordRTC for recording, then uploads the resulting Blob to Alibaba Cloud OSS for storage and later OCR processing.

Step 1 – Installation

npm install recordrtc
import { RecordRTCPromisesHandler } from 'recordrtc';

Step 2 – Initialization

Available cameras are enumerated via navigator.mediaDevices.enumerateDevices(). The first environment‑facing (rear) camera is selected, and a set of video constraints (frameRate, width, height, facingMode, aspectRatio) is built. The async method getVideoConstraints() returns a MediaTrackConstraints object.

async getVideoConstraints() {
  let deviceId = '';
  if (!this.activeCamera) {
    // Enumerate video inputs; reverse() because rear cameras tend to be listed last
    const deviceList = await navigator.mediaDevices.enumerateDevices();
    const videoDeviceList = deviceList.filter(deviceInfo => deviceInfo.kind === 'videoinput').reverse();
    this.$emit('output-list', videoDeviceList);
    // Probe each device until one reports an environment (rear) facing mode
    for (const device of videoDeviceList) {
      const stream = await navigator.mediaDevices.getUserMedia({
        video: { deviceId: device.deviceId },
        audio: false,
      });
      const isEnvironment = stream.getVideoTracks()[0].getSettings().facingMode === 'environment';
      // Release the probe stream immediately so the camera is free for recording
      stream.getTracks().forEach(track => track.stop());
      if (isEnvironment) {
        deviceId = device.deviceId;
        break;
      }
    }
  }
  const result = {
    frameRate: { ideal: 6, max: 10 },
    width: this.env.isAndroid ? { ideal: 960, min: 480, max: 960 } : { ideal: 480, min: 480, max: 960 },
    height: this.env.isAndroid ? { ideal: 1280, min: 640, max: 1280 } : { ideal: 640, min: 640, max: 1280 },
    facingMode: 'environment',
    deviceId: this.activeCamera ? this.activeCamera.deviceId : deviceId,
    aspectRatio: 3 / 4,
  };
  // Drop the deviceId constraint entirely if no rear camera was identified,
  // so getUserMedia falls back to facingMode alone
  if (!deviceId && !this.activeCamera) {
    delete result.deviceId;
  }
  return result;
}
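The constraint‑building branch above can also be isolated as a pure function, which makes the Android/iOS resolution split easy to unit‑test. A sketch under that assumption (the function name and the explicit isAndroid flag are not from the component; the values mirror its code):

```javascript
// Hypothetical standalone version of the constraint object built above.
// Android devices get the higher ideal resolution; other platforms the lower one.
function buildVideoConstraints(isAndroid, deviceId) {
  const constraints = {
    frameRate: { ideal: 6, max: 10 },
    width: isAndroid ? { ideal: 960, min: 480, max: 960 } : { ideal: 480, min: 480, max: 960 },
    height: isAndroid ? { ideal: 1280, min: 640, max: 1280 } : { ideal: 640, min: 640, max: 1280 },
    facingMode: 'environment',
    aspectRatio: 3 / 4, // portrait 3:4 frame suits document capture
  };
  // Only pin a deviceId when one was actually found; an empty deviceId
  // constraint can make getUserMedia fail on some browsers.
  if (deviceId) constraints.deviceId = deviceId;
  return constraints;
}
```

Passing the result as { video: buildVideoConstraints(...), audio: false } to getUserMedia would yield the stream handed to RecordRTC in the next step.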

Step 3 – Recording

The component creates a RecordRTCPromisesHandler instance. Calling startRecording() begins capture, and a 5‑second countdown timer stops the recording. After stopping, getBlob() retrieves the video Blob.

async record() {
  // Begin capture; the recorder was created from the getUserMedia stream
  if (this.recorder) {
    await this.recorder.startRecording();
    this.isRecording = true;
  }
}
// Countdown timer: timerText starts at 6 and ticks down once per second,
// yielding a 5-second clip before resetTimer() stops the recorder
startTimer() {
  if (this.timerText > 1) {
    this.recording = true;
    this.timerText -= 1;
    setTimeout(() => { this.startTimer(); }, 1000);
  } else {
    this.resetTimer();
  }
}
resetTimer() {
  // Ask the child recorder component to stop, then restore the UI state
  if (this.$refs.videoRecorder) {
    this.$refs.videoRecorder.stop();
  }
  this.recording = false;
  this.btnImgUrl = btnImgUrlMapper.DEFALUT;
  this.timerText = 6;
}
async stop() {
  if (this.recorder) {
    await this.recorder.stopRecording();
    this.isRecording = false;
    this.uploadFile();
  }
}
async uploadFile() {
  // getBlob() resolves with the recorded video Blob, handed to the parent
  const video = await this.recorder.getBlob();
  this.$emit('recorded', { video });
}

Step 4 – Upload

The recorded Blob is uploaded to Alibaba Cloud OSS. The returned file name is used to generate a preview URL, which is then passed to an OCR service for further processing.
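The upload step needs a unique object key for each clip. A sketch of a key‑generation helper (the prefix, naming scheme, and .webm extension are assumptions, not the project's actual convention; WebM is RecordRTC's default container in Chrome):

```javascript
// Hypothetical object-key builder for the OSS upload.
// driverId and timestamp keep keys unique and traceable per capture.
function buildOssKey(driverId, timestamp, ext = 'webm') {
  const date = new Date(timestamp).toISOString().slice(0, 10); // e.g. '2023-11-14'
  return `driver-auth/${date}/${driverId}-${timestamp}.${ext}`;
}
```

The recorded Blob and a key like this would then go to the OSS SDK's upload call; the stored object name feeds both the preview URL and the OCR request.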

The whole workflow enables a consistent video capture experience across native apps, mini‑programs and web pages, leveraging standard WebRTC APIs and the RecordRTC library.

Tags: Front-end · H5 · JavaScript · WebRTC · RecordRTC · video capture
Written by

HelloTech

Official Hello technology account, sharing tech insights and developments.
