Building a Flask Translation App with GLM-4-Flash API
This guide shows how to create a Flask web service that translates text using ZhipuAI's free, high‑speed GLM‑4‑Flash API, covering environment setup, API key configuration, asynchronous task handling, endpoint implementation, and retrieving results via a unique task ID.
On August 27, Zhipu released GLM-4-Flash, a free large‑model API for fast text generation, code assistance, translation, knowledge‑base Q&A, PPT assistance, mind‑map generation and more.
What is GLM-4-Flash
GLM-4-Flash is a high‑speed, free LLM API suitable for article creation, code debugging, translation, knowledge‑base Q&A, PPT assistance, mind‑map generation and more.
Typical usage scenarios
Developers can use it for data extraction (entity recognition, keyword extraction), literature reading assistance, script and model code generation, model optimization suggestions, etc. Below is a demo of a text‑to‑text translation service built with Flask.
Setup
1. Install Python 3.12.
2. Install required packages:
pip install Flask
pip install zhipuai

3. Obtain an API key from the ZhipuAI platform and set api_key = "your_key" in the code.
Running the Flask app
Execute:
python app.py

The app starts at http://127.0.0.1:5000/, where users can submit text for translation.
Key code overview
Import libraries:
from flask import Flask, request, jsonify, render_template
from zhipuai import ZhipuAI
import threading
import uuid

Initialize the app and client:
app = Flask(__name__)
api_key = "" # replace with your ZhipuAI API key
zhipuai_client = ZhipuAI(api_key=api_key)

Data structures for task management:
translation_results = {}
task_status = {}
lock = threading.Lock()

Translation function:
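These module-level dicts are shared across request-handling threads, so every paired update goes through the lock to keep status and result in step. A stand-alone illustration of that pattern (same names as the article, hypothetical helper functions):

```python
import threading

translation_results = {}
task_status = {}
lock = threading.Lock()

def finish(task_id, result):
    # Update both dicts under one lock so no reader
    # ever sees status "completed" without a result.
    with lock:
        translation_results[task_id] = result
        task_status[task_id] = "completed"

def snapshot(task_id):
    # Read both dicts atomically for a consistent view.
    with lock:
        return task_status.get(task_id), translation_results.get(task_id)

finish("t1", "Hello")
print(snapshot("t1"))  # ('completed', 'Hello')
```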
def translate_text(prompt, source_lang, target_lang, user_id=None):
    try:
        languages = {'en': 'English', 'zh': 'Chinese'}
        response = zhipuai_client.chat.completions.create(
            model="glm-4-flash",
            messages=[...],  # elided in the original; build a chat prompt asking to translate `prompt` from source_lang to target_lang
            user_id=user_id,
            top_p=0.7,
            temperature=0.95,
            max_tokens=1024,
            tools=[{"type": "web_search", "web_search": {"search_result": True}}],
            stream=False,
        )
        if response and response.choices[0].message:
            return response.choices[0].message.content
        else:
            print("Error: No translated text returned.")
            return None
    except Exception as e:
        print(f"Error translating text: {e}")
        return None

Routes:
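A note on translate_text above: its messages argument is elided ([...]) in the original. Under the standard chat-completions message shape, one plausible construction looks like the sketch below; the system/user prompt wording is an assumption, not the author's exact prompt:

```python
def build_messages(prompt, source_lang, target_lang):
    # Hypothetical prompt wording; the original article elides this payload.
    languages = {'en': 'English', 'zh': 'Chinese'}
    return [
        {"role": "system",
         "content": "You are a translation assistant. Reply with the translation only."},
        {"role": "user",
         "content": f"Translate the following {languages[source_lang]} text "
                    f"into {languages[target_lang]}: {prompt}"},
    ]

msgs = build_messages("你好", "zh", "en")
print(msgs[1]["content"])  # Translate the following Chinese text into English: 你好
```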
@app.route('/')
def index():
    return render_template('index.html')

@app.route('/translate', methods=['POST'])
def translate():
    data = request.json
    prompt = data.get('prompt')
    source_lang = data.get('source_lang', 'zh')
    target_lang = data.get('target_lang', 'en')
    user_id = data.get('user_id')
    if not prompt:
        return jsonify({"error": "No text provided"}), 400
    task_id = str(uuid.uuid4())
    with lock:
        task_status[task_id] = "processing"

    def handle_translation():
        translated_text = translate_text(prompt, source_lang, target_lang, user_id)
        with lock:
            translation_results[task_id] = translated_text
            task_status[task_id] = "completed"

    threading.Thread(target=handle_translation).start()
    return jsonify({"message": "Translation in progress", "task_id": task_id}), 202

@app.route('/get-translation/<task_id>', methods=['GET'])
def get_translation(task_id):
    with lock:
        status = task_status.get(task_id)
        result = translation_results.get(task_id)
    if status == "completed" and result:
        return jsonify({"task_id": task_id, "status": status, "translated_text": result}), 200
    elif status == "processing":
        return jsonify({"task_id": task_id, "status": status, "message": "Translation is still in progress"}), 202
    else:
        return jsonify({"task_id": task_id, "status": "failed", "message": "Translation failed or task does not exist"}), 404

Start the server:
if __name__ == '__main__':
    app.run(debug=True)

Users can access the home page, submit text, and retrieve translation results via the provided task ID.
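The submit-then-poll flow the two endpoints implement maps to three HTTP outcomes: 202 while processing, 200 when completed, 404 when failed or unknown. The state machine behind that flow can be exercised without Flask; this sketch fakes the translation call (string reversal as a stand-in for GLM-4-Flash), so names and timings are illustrative only:

```python
import threading
import time
import uuid

translation_results = {}
task_status = {}
lock = threading.Lock()

def fake_translate(text):
    time.sleep(0.05)   # stand-in for the GLM-4-Flash round trip
    return text[::-1]  # placeholder "translation"

def start_task(text):
    # Mirrors /translate: register the task, run it in the background,
    # and return the task ID immediately (the HTTP 202 case).
    task_id = str(uuid.uuid4())
    with lock:
        task_status[task_id] = "processing"

    def worker():
        result = fake_translate(text)
        with lock:
            translation_results[task_id] = result
            task_status[task_id] = "completed"

    threading.Thread(target=worker).start()
    return task_id

def poll(task_id, timeout=2.0):
    # Mirrors a client looping on /get-translation/<task_id>.
    deadline = time.time() + timeout
    while time.time() < deadline:
        with lock:
            if task_status.get(task_id) == "completed":
                return translation_results[task_id]  # the HTTP 200 case
        time.sleep(0.01)  # still "processing": wait and retry (the 202 case)
    return None  # failed or unknown task (the 404 case)

tid = start_task("abc")
print(poll(tid))  # cba
```

In the real service the polling loop lives in the browser or client script; the server itself never blocks, which is why /translate can answer in milliseconds even though the model call takes longer.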
In summary, GLM‑4‑Flash offers a free, high‑performance LLM API that can be quickly integrated into backend services such as this Flask translation demo.
Java Tech Enthusiast