Unlock Qwen3: Powerful LLM Features and Zero‑Code Deployment on Alibaba Cloud

This article introduces Qwen3, the latest family of dense and mixture‑of‑experts (MoE) large language models with dual‑mode reasoning, enhanced inference, multilingual support, and strong agent capabilities, and explains how Alibaba Cloud's PAI‑Model Gallery enables zero‑code, one‑click deployment and enterprise‑grade usage.

Alibaba Cloud Big Data AI Platform

Model Overview

Qwen3 is the latest generation of the Qwen series of large language models, offered in dense and mixture‑of‑experts (MoE) variants. It delivers major advances in reasoning, instruction following, agent capabilities, and multilingual support.

Dual‑mode thinking: Seamlessly switch between a “thinking” mode for complex logical reasoning, mathematics, and coding, and a “non‑thinking” mode for efficient general conversation.

Enhanced reasoning: Superior performance in mathematics, code generation, and common‑sense logical reasoning compared with previous Qwen models.

Human‑preference alignment: Excels in creative writing, role‑play, multi‑turn dialogue, and instruction following, delivering natural and immersive conversations.

Agent capabilities: Integrates external tools precisely in both modes, leading open‑source models in complex agent‑based tasks.

Multilingual support: Supports over 100 languages and dialects with strong understanding, reasoning, instruction following, and generation.
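In common Qwen3 serving stacks, the dual‑mode switch is exposed as a request‑time parameter rather than a separate model. The sketch below builds OpenAI‑compatible chat request bodies that toggle thinking on and off; the `chat_template_kwargs` / `enable_thinking` names follow the convention documented for vLLM/SGLang Qwen3 deployments and should be verified against the documentation of your specific deployment.

```python
# Sketch: build OpenAI-compatible request bodies that toggle Qwen3's
# thinking mode. The "chat_template_kwargs" / "enable_thinking" field
# names follow the Qwen3 serving convention for vLLM/SGLang; verify
# them against your deployment's documentation.

def build_chat_payload(prompt: str, thinking: bool) -> dict:
    """Return a chat-completions request body for a Qwen3 endpoint."""
    return {
        "model": "Qwen3-8B",  # model name as registered by the serving framework
        "messages": [{"role": "user", "content": prompt}],
        # Extension field passed through to the chat template:
        "chat_template_kwargs": {"enable_thinking": thinking},
    }

# Thinking mode for a hard reasoning task, non-thinking for casual chat:
reasoning_req = build_chat_payload("Prove that sqrt(2) is irrational.", thinking=True)
chat_req = build_chat_payload("Recommend a book on distributed systems.", thinking=False)
```

The same request schema works for both modes, so an application can decide per query whether to pay the extra latency of chain‑of‑thought generation.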

PAI‑Model Gallery

The Model Gallery is a component of Alibaba Cloud’s AI platform PAI, aggregating high‑quality pretrained models from global open‑source communities across domains such as LLMs, AIGC, computer vision (CV), and natural language processing (NLP). Through PAI, users can deploy these models with zero code, covering the full lifecycle of training, deployment, and inference.

PAI‑Model Gallery now includes all open‑source Qwen3 models, offering enterprise‑grade deployment solutions.

Zero‑code one‑click deployment

Automatic cloud resource adaptation

Ready‑to‑use APIs

Full‑process operation and maintenance

Enterprise‑level security with data staying in‑domain

One‑Click Deployment Guide

Example using Qwen3‑8B (low inference cost for quick validation):

Find Qwen3‑8B in the Model Gallery or go directly to https://x.sm.cn/W5Qpfy.

Click “Deploy” on the model detail page; the high‑performance SGLang and vLLM inference frameworks are supported. Choose compute resources and complete cloud deployment with a single click.

After deployment, retrieve the endpoint and token from the service page. Refer to the model’s documentation for calling methods.
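Once you have the endpoint and token, a call can be as simple as an HTTP POST. The sketch below assumes an OpenAI‑compatible server (the default for vLLM/SGLang deployments, hence the `/v1/chat/completions` path); the endpoint URL and token are placeholders for the values copied from the service page, and the exact authorization scheme should be checked against the model’s calling documentation.

```python
import json
import urllib.request

# Sketch: call a Qwen3 service deployed on PAI-EAS. The endpoint URL and
# token are placeholders copied from the service page; the path assumes an
# OpenAI-compatible vLLM/SGLang server. Verify the auth header format
# against the model's calling documentation.

def build_request(endpoint: str, token: str, prompt: str) -> urllib.request.Request:
    """Construct the chat-completions HTTP request without sending it."""
    body = json.dumps({
        "model": "Qwen3-8B",
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        url=endpoint.rstrip("/") + "/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": token,  # assumed: raw service token as auth header
        },
    )

def chat(endpoint: str, token: str, prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(endpoint, token, prompt)) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Placeholders: substitute the endpoint and token from your service page.
    print(chat("https://<your-service>.<region>.example-endpoint.com", "<token>",
               "Give me a short introduction to large language models."))
```

Separating request construction from sending makes it easy to unit‑test the call path before the service is live.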

Use the PAI‑EAS inference service to test the deployed model; the responses demonstrate strong chain‑of‑thought abilities.
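In thinking mode, Qwen3 emits its chain of thought wrapped in `<think>...</think>` tags ahead of the final answer, so responses are easy to post‑process. A small helper like the sketch below separates the two; the sample completion string is illustrative, not actual model output.

```python
import re

# Sketch: split a Qwen3 thinking-mode completion into its reasoning trace
# and final answer. Thinking-mode output wraps the chain of thought in
# <think>...</think> tags; the sample text below is illustrative only.

def split_thinking(text: str) -> tuple[str, str]:
    """Return (reasoning, answer) from a Qwen3 completion string."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()  # non-thinking mode: no reasoning block
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

sample = "<think>Two plus two: the sum is 4.</think>\nThe answer is 4."
reasoning, answer = split_thinking(sample)
```

This lets an application log or hide the reasoning trace while showing users only the final answer.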

Minimum hardware requirements and maximum token limits for different inference frameworks are shown in the table below.

Deployment specifications table

Additional Model Support

Beyond the full Qwen3 series, PAI‑Model Gallery continuously provides rapid deployment, training, and evaluation for popular open‑source models such as DeepSeek‑R1 (optimized inference), the full DeepSeek‑R1, and QwQ‑32B.

Contact Us

Follow PAI‑Model Gallery for updates on SOTA models. For model requests, join the user group (DingTalk group 79680024618) or click “Read Original” to contact the team.

Tags: large language model, Alibaba Cloud, Qwen3, Zero‑Code Deployment
Written by

Alibaba Cloud Big Data AI Platform

The Alibaba Cloud Big Data AI Platform builds on Alibaba’s leading cloud infrastructure, big‑data and AI engineering capabilities, scenario algorithms, and extensive industry experience to offer enterprises and developers a one‑stop, cloud‑native big‑data and AI capability suite. It boosts AI development efficiency, enables large‑scale AI deployment across industries, and drives business value.
