How to Use webman/openai for Asynchronous Large‑Model Calls and Feedback Classification
This guide shows how to install the asynchronous webman/openai client, make streaming and non‑streaming calls to the Tongyi Qianwen API, and classify user feedback in PHP with high concurrency using the Webman framework.
The webman/openai library provides a non‑blocking, asynchronous OpenAI client for PHP. When used with the long‑running Webman framework, it enables thousands of concurrent calls to large language model APIs, avoiding the blocking behavior of traditional PHP‑FPM.
Installation
Install the package via Composer:

composer require webman/openai

Reference: https://www.workerman.net/plugin/157
Streaming Response Example
The following controller sends a streaming request to the Tongyi Qianwen endpoint (https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions) using Webman\Openai\Chat. The request supplies an API key, the qwen-plus model, and a system prompt.
<?php
declare(strict_types=1);

namespace app\home\v1\controller;

use support\Request;
use support\Response;
use Webman\Openai\Chat;
use Workerman\Protocols\Http\Chunk;

class ChatController
{
    public function completions(Request $request): Response
    {
        $connection = $request->connection;
        $chat = new Chat([
            'apikey' => 'sk-xxxxxxxxxxxxxxxxxxxxx',
            'api'    => 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions',
        ]);
        $chat->completions([
            'model'    => 'qwen-plus',
            'stream'   => true,
            'messages' => [
                ['role' => 'system', 'content' => 'You are a helpful assistant for classifying user feedback.'],
                // "Please introduce the webman framework!"
                ['role' => 'user', 'content' => '请介绍一下webman框架!'],
            ],
        ], [
            // Called for each streamed chunk; forward it to the client immediately.
            'stream' => function ($data) use ($connection) {
                $connection->send(new Chunk(json_encode($data, JSON_UNESCAPED_UNICODE) . "\n"));
            },
            // Called once when the request finishes; an empty Chunk ends the chunked response.
            'complete' => function ($result, $response) use ($connection) {
                if (isset($result['error'])) {
                    $connection->send(new Chunk(json_encode($result, JSON_UNESCAPED_UNICODE) . "\n"));
                }
                $connection->send(new Chunk(''));
            },
        ]);
        return response()->withHeaders(['Transfer-Encoding' => 'chunked']);
    }
}

Each chunk contains a delta field with partial content, allowing the client to render the response incrementally.
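On the consuming side, the partial content can be reassembled by concatenating each chunk's delta text. A minimal sketch, assuming the chunks follow the OpenAI-compatible delta format (the sample chunks below are illustrative, not captured API output):

```php
<?php
// Reassemble a full reply from streamed chunks. Each chunk is assumed to
// carry choices[0].delta.content; the final chunk often carries no content.
function collectDeltas(array $chunks): string
{
    $text = '';
    foreach ($chunks as $chunk) {
        $text .= $chunk['choices'][0]['delta']['content'] ?? '';
    }
    return $text;
}

$chunks = [
    ['choices' => [['delta' => ['content' => 'Webman ']]]],
    ['choices' => [['delta' => ['content' => 'is fast.']]]],
    ['choices' => [['delta' => []]]], // terminal chunk with no content
];

echo collectDeltas($chunks); // → "Webman is fast."
```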
Non‑Streaming Response Example
Set 'stream' => false to receive a single JSON object containing the full answer, token usage, and model information.
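With streaming disabled, the `complete` callback receives the whole response at once. A minimal sketch of pulling the interesting fields out of that single object, assuming the OpenAI-compatible response shape that DashScope returns (the sample values are illustrative, not real API output):

```php
<?php
// Extract the answer text, model name, and token usage from a
// non-streaming, OpenAI-compatible chat completion result.
function extractAnswer(array $result): array
{
    return [
        'content' => $result['choices'][0]['message']['content'] ?? null,
        'model'   => $result['model'] ?? null,
        'tokens'  => $result['usage']['total_tokens'] ?? null,
    ];
}

$sample = [
    'model'   => 'qwen-plus',
    'choices' => [['message' => ['role' => 'assistant', 'content' => 'Webman is a high-performance PHP framework.']]],
    'usage'   => ['prompt_tokens' => 20, 'completion_tokens' => 12, 'total_tokens' => 32],
];

$answer = extractAnswer($sample);
echo $answer['content'] . "\n"; // the full answer arrives in one piece
```

The null coalescing chain keeps the helper safe against error responses that omit `choices` entirely.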
Batch Classification of User Feedback
This controller classifies multiple feedback strings into four categories (price too high, insufficient after‑sales support, poor user experience, other). It iterates over an array of feedback, sends a non‑streaming request for each, logs the full API response, extracts the classification result, and streams intermediate JSON results back to the client.
<?php
declare(strict_types=1);

namespace app\home\v1\controller;

use support\Log;
use support\Request;
use support\Response;
use Webman\Openai\Chat;
use Workerman\Protocols\Http\Chunk;

class ChatController
{
    public function completions(Request $request): Response
    {
        $connection = $request->connection;
        $chat = new Chat([
            'apikey' => 'sk-xxxxxxxxxxxxxxx',
            'api'    => 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions',
        ]);
        $userFeedbacks = [
            "这个手机太贵了,对于我这样的普通消费者不太友好。", // price too high
            "客服总是找不到,售后支持根本不给力。",             // poor after-sales support
            "使用起来卡顿,体验很不好。",                       // poor user experience
            "包装有点损坏,不过可以接受。",                     // other
        ];
        $pending = count($userFeedbacks);
        foreach ($userFeedbacks as $index => $feedback) {
            $chat->completions([
                'model'    => 'qwen-plus',
                'stream'   => false,
                'messages' => [
                    // "You are an assistant for classifying user feedback."
                    ['role' => 'system', 'content' => '你是一个用于分类用户反馈的助手。'],
                    ['role' => 'user', 'content' => "请将以下用户反馈按原因分类:价格过高、售后支持不足、产品使用体验不佳、其他。反馈内容:{$feedback}\n回答格式:分类结果:"],
                ],
                'temperature' => 0,
            ], [
                // The calls are asynchronous: the loop finishes before any response
                // arrives, so each result must be sent from inside its callback.
                'complete' => function ($result, $response) use ($connection, $index, $feedback, &$pending) {
                    Log::info("Response for feedback $index: " . json_encode($result, JSON_UNESCAPED_UNICODE));
                    if (isset($result['error'])) {
                        $classified = "Error for feedback $index: " . $result['error']['message'];
                    } elseif (isset($result['choices'][0]['message']['content']) && strpos($result['choices'][0]['message']['content'], '分类结果:') === 0) {
                        $classified = trim($result['choices'][0]['message']['content']);
                    } else {
                        $classified = "Error for feedback $index: empty or invalid response from API";
                    }
                    $connection->send(new Chunk(json_encode([
                        'index'    => $index,
                        'feedback' => $feedback,
                        'result'   => $classified,
                    ], JSON_UNESCAPED_UNICODE) . "\n"));
                    // Close the chunked response once every feedback item is done.
                    if (--$pending === 0) {
                        $connection->send(new Chunk(''));
                        Log::info('All feedbacks processed');
                    }
                },
            ]);
        }
        return response()->withHeaders(['Transfer-Encoding' => 'chunked']);
    }
}

Each streamed JSON chunk contains the original feedback, its index, and the classification result, e.g.
{"index":0,"feedback":"这个手机太贵了…","result":"分类结果:价格过高"}.
Key Takeaways
Using webman/openai enables non‑blocking API calls, supporting high concurrency in PHP services.
Streaming mode delivers partial results instantly, useful for large responses or real‑time UI updates.
Non‑streaming mode simplifies batch processing such as multi‑item classification.
Proper error handling and logging (via Log::info) are essential for production reliability.
Open Source Tech Hub
Sharing cutting-edge internet technologies and practical AI resources.