Integrate Alibaba Tongyi Qianwen with Webman OpenAI Plugin in 3 Minutes

This guide walks developers through registering for Alibaba Cloud's Tongyi Qianwen large‑language model, creating a DashScope API‑KEY, and using the OpenAI‑compatible interface with the Webman/openai PHP plugin and a simple JavaScript client to stream responses.

Open Source Tech Hub

Overview

Tongyi Qianwen is Alibaba Cloud’s ultra‑large language model launched publicly on September 13, 2023. It belongs to the AIGC (AI Generated Content) domain and is offered as a Model‑as‑a‑Service (MaaS) platform.

Official site: https://tongyi.aliyun.com/qianwen/bag/home

Registration and Activation

Log in to Alibaba Cloud, enable the Lingji model service, and open the DashScope console at https://dashscope.console.aliyun.com/overview.

Create an API‑KEY

The API‑KEY is the credential for accessing DashScope. It is tied to the Alibaba Cloud account (or an authorized RAM sub‑account), so all users of that account share the same key. Manage the key via the DashScope console; any modification affects the entire account.
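Because the key is account-wide, it is worth keeping it out of source code. One common approach (an assumption on our part, not something the DashScope docs mandate) is to export it as an environment variable before starting the server; the value below is a placeholder.

```shell
# Hypothetical placeholder key; substitute your real DashScope API-KEY.
export DASHSCOPE_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxx"

# Verify the variable is visible to the shell that will start Webman.
echo "${DASHSCOPE_API_KEY:0:3}"
```

In PHP the key can then be read with `getenv('DASHSCOPE_API_KEY')` instead of being hardcoded in the controller.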

Using the Service

DashScope provides an OpenAI‑compatible endpoint, allowing developers to call the model with standard OpenAI APIs or SDKs after configuring the DashScope API‑KEY and endpoint.

Documentation: https://help.aliyun.com/zh/dashscope/developer-reference/compatibility-of-openai-with-dashscope
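"Compatible" here means the request body has the familiar OpenAI chat-completions shape. As a minimal sketch (the `qwen-turbo` model name appears later in this guide; the endpoint path is the documented compatible-mode route, and the actual network call is omitted so the snippet stays self-contained):

```javascript
// Build an OpenAI-style chat-completions request body for DashScope's
// compatible-mode endpoint.
const endpoint = 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions';
const payload = {
    model: 'qwen-turbo',                             // model exposed via compatible mode
    stream: true,                                    // request chunked streaming output
    messages: [{role: 'user', content: 'Hello'}]
};
const body = JSON.stringify(payload);
// A real call would be: fetch(endpoint, {method: 'POST',
//   headers: {'Authorization': 'Bearer <API-KEY>', 'Content-Type': 'application/json'},
//   body});
console.log(JSON.parse(body).model); // "qwen-turbo"
```

Any OpenAI SDK that lets you override the base URL can point at this endpoint the same way.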

Server‑Side Integration (Webman)

Install the asynchronous Webman OpenAI client plugin:

composer require webman/openai

The webman/openai plugin makes non‑blocking calls, so a single process can handle tens of thousands of concurrent requests.

Example controller ChatController.php (PHP 8):

<?php
declare(strict_types=1);

namespace app\controller;

use support\Request;
use support\Response;
use Webman\Openai\Chat;
use Workerman\Protocols\Http\Chunk;

class ChatController {
    public function home(Request $request): Response {
        return view('chat/home');
    }

    public function completions(Request $request): Response {
        $connection = $request->connection;
        $chat = new Chat([
            'apikey' => 'sk-xxxxxxxxxxxxxxxxxxxxxx',
            'api'    => 'https://dashscope.aliyuncs.com/compatible-mode'
        ]);
        $chat->completions([
            'model'    => 'qwen-turbo',
            'stream'   => true,
            'messages' => [[
                'role'    => 'user',
                'content' => '你是什么大模型?'
            ]]
        ], [
            'stream' => function ($data) use ($connection) {
                // Forward each streamed chunk to the client, one JSON object per line.
                $connection->send(new Chunk(json_encode($data, JSON_UNESCAPED_UNICODE) . "\n"));
            },
            'complete' => function ($result, $response) use ($connection) {
                // Surface any API error to the client before closing the stream.
                if (isset($result['error'])) {
                    $connection->send(new Chunk(json_encode($result, JSON_UNESCAPED_UNICODE) . "\n"));
                }
                // An empty chunk terminates the chunked response.
                $connection->send(new Chunk(''));
            },
        ]);
        return response()->withHeaders([
            "Transfer-Encoding" => "chunked",
        ]);
    }
}

External service address for completions:

http://127.0.0.1:8787/chat/completions

Client‑Side (HTML + JavaScript)

The view file chat/home.html reads the streamed response and appends each chunk to a container.

<!DOCTYPE html>
<html>
<head>
    <meta charset="UTF-8">
    <title>OpenAI Async Client for Tongyi Qianwen</title>
</head>
<body>
    <h1>OpenAI Async Client Accessing Tongyi Qianwen Model – 3‑Minute AIGC Setup</h1>
    <div id="data-container"></div>
    <script>
        const url = 'http://127.0.0.1:8787/chat/completions';
        function fetchChunkedData() {
            fetch(url, {method: 'GET', headers: {'Accept': 'text/plain; chunks=true'}})
                .then(response => {
                    if (!response.ok) throw new Error('Network response was not ok ' + response.statusText);
                    return response.body;
                })
                .then(body => {
                    const reader = body.getReader();
                    processChunks(reader);
                })
                .catch(error => console.error('Fetch error:', error));
        }
        // A single decoder with {stream: true} buffers partial multi-byte
        // sequences (e.g. Chinese text) until the rest arrives in the next chunk.
        const decoder = new TextDecoder('utf-8');
        function processChunks(reader) {
            reader.read().then(({done, value}) => {
                if (done) { console.log('Stream complete'); return; }
                const chunk = decoder.decode(value, {stream: true});
                document.getElementById('data-container').insertAdjacentHTML('beforeend', chunk);
                processChunks(reader);
            });
        }
        fetchChunkedData();
    </script>
</body>
</html>

External service address for the HTML page:

http://127.0.0.1:8787/chat/home

Running the Application

Start the Webman server with php start.php start, then open the home page URL in a browser to watch the response stream in. Example JSON chunk returned by the model:

{
    "choices": [{
        "delta": {"content": "我是", "role": "assistant"},
        "finish_reason": null,
        "index": 0,
        "logprobs": null
    }],
    "object": "chat.completion.chunk",
    "created": 1714803699,
    "system_fingerprint": "",
    "model": "qwen-turbo",
    "id": "chatcmpl-650738c41d0794079829207a6bdfbec6"
}
...
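Each line the server forwards is one such JSON object, and the text to display lives in choices[0].delta.content. A small sketch of accumulating the full reply from raw lines (the sample chunks below are illustrative and abbreviated to the fields actually used):

```javascript
// Accumulate assistant text from newline-delimited JSON chunks,
// mirroring what the server-side 'stream' callback emits.
function collectReply(lines) {
    let reply = '';
    for (const line of lines) {
        if (!line.trim()) continue;               // skip blank lines
        const chunk = JSON.parse(line);
        const delta = chunk.choices?.[0]?.delta;  // delta carries the new text
        if (delta && delta.content) reply += delta.content;
    }
    return reply;
}

// Illustrative sample chunks, trimmed to the fields this sketch reads.
const sample = [
    '{"choices":[{"delta":{"content":"我是","role":"assistant"},"index":0}]}',
    '{"choices":[{"delta":{"content":"通义千问"},"index":0}]}'
];
console.log(collectReply(sample)); // 我是通义千问
```

The client page above simply inserts raw chunks into the DOM; parsing them like this instead lets you render only the assistant text.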
Tags: JavaScript, PHP, Webman, DashScope, OpenAI‑compatible API, Tongyi Qianwen
Written by

Open Source Tech Hub

Sharing cutting-edge internet technologies and practical AI resources.
