Build a Zero‑Setup Face‑to‑Face Translator Mini‑Program with WeChat’s AI Plugin

This guide walks developers through adding WeChat’s free AI translation plugin to a mini‑program, covering plugin installation, voice input, real‑time transcription, text translation, and speech synthesis in five straightforward steps, complete with code snippets and configuration details.

WeChat Backend Team

WeChat recently released a free AI translation plugin for mini‑programs, enabling developers to add speech‑to‑text, text‑to‑speech, and real‑time translation with just a few API calls.

Step 1: Add the Plugin

Log in to the WeChat mini-program admin console, go to Settings → Third-Party Services → Add Plugin, search for "WeChat Speech-to-Text", and add it. Then declare the plugin in app.json:

{
  "plugins": {
    "WechatSI": {
      "version": "0.0.6",
      "provider": "wx069ba97219f66d99"
    }
  }
}

Import the plugin in index.js and obtain the global speech‑recognition manager:

const plugin = requirePlugin("WechatSI");
const manager = plugin.getRecordRecognitionManager();

Step 2: Voice Input

Bind a press-and-hold button: touch start begins recording, touch end stops it. Note that the handler name bound in the WXML must match the method name on the Page:

<view catchtouchstart="streamRecord" catchtouchend="streamRecordEnd">中文</view>
Page({
  streamRecord() { manager.start({ lang: 'zh_CN' }); },
  streamRecordEnd() { manager.stop(); }
});
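Recording also requires the user's `scope.record` authorization; before the first call to `manager.start`, it is worth checking `wx.getSetting` and requesting the scope if it is missing. A minimal sketch of a pure helper for that decision (the helper name `needsRecordAuth` is my own, not part of the plugin):

```javascript
// Decide whether scope.record still needs to be requested, given the
// authSetting object returned by wx.getSetting.
//   true  → call wx.authorize({ scope: 'scope.record' }) (or wx.openSetting
//           if the user previously denied the request)
//   false → already authorized; safe to call manager.start()
function needsRecordAuth(authSetting) {
  return authSetting['scope.record'] !== true;
}
```

Typical usage would be inside the settings callback, e.g. `wx.getSetting({ success: (res) => { if (needsRecordAuth(res.authSetting)) { /* request it */ } } })`.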

Step 3: Bind Recording Callbacks

Display real‑time transcription results:

<view>语音识别内容:{{currentText}}</view>
Page({
  data: { currentText: '' },
  initRecord() {
    manager.onRecognize = (res) => {
      this.setData({ currentText: res.result });
    };
    manager.onStop = (res) => {
      if (res.result) {
        this.setData({ currentText: res.result });
        this.translateTextAction();
      }
    };
  },
  onLoad() { this.initRecord(); }
});

Step 4: Text Translation

Translate the recognized text and play the synthesized voice:

Page({
  data: { currentText: '', translateText: '' },
  translateTextAction() {
    const lfrom = 'zh_CN';
    const lto = 'en_US';
    plugin.translate({
      lfrom,
      lto,
      content: this.data.currentText,
      tts: true,
      success: (resTrans) => {
        this.setData({ translateText: resTrans.result });
        wx.playBackgroundAudio({ dataUrl: resTrans.filename, title: '' });
      }
    });
  }
});
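A face-to-face translator needs both directions: zh_CN → en_US for one speaker and en_US → zh_CN for the other. A small pure helper can flip the direction before calling `plugin.translate` (the helper name `swapLangs` is my own):

```javascript
// Swap the translation direction, e.g. for a second "English" button
// that recognizes en_US speech and translates it back into Chinese.
function swapLangs(dir) {
  return { lfrom: dir.lto, lto: dir.lfrom };
}
```

The second button's handler would then call `manager.start({ lang: 'en_US' })` and pass the swapped pair into `plugin.translate`.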

Step 5: Voice Synthesis

The synthesized audio file is temporary; if it has expired, regenerate it with plugin.textToSpeech:

plugin.textToSpeech({
  lang: 'zh_CN',
  content: '我想重新进行语音合成',
  success: (res) => {
    // obtain new audio file and expiration time
  }
});
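Assuming the success callback returns the new audio file together with an expiration timestamp (a Unix time in seconds; the exact field name, e.g. `expire_time`, is an assumption here), a cached file can be checked before playback and only re-synthesized when stale. A sketch:

```javascript
// Return true when a cached TTS file should be regenerated.
//   expireTime: Unix timestamp in seconds after which the file is invalid
//   nowMs:      current time in milliseconds, i.e. Date.now()
function isTtsExpired(expireTime, nowMs) {
  return nowMs / 1000 >= expireTime;
}
```

Checking `isTtsExpired(cached.expireTime, Date.now())` before playing avoids an unnecessary `plugin.textToSpeech` round-trip for still-valid audio.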

With these simple steps, developers can create a mini‑program that supports voice input, speech synthesis, and text translation, achieving a “zero‑setup” face‑to‑face translation experience.
