How to Train, Evaluate, and Deploy Qwen2.5-Coder on Alibaba Cloud PAI‑QuickStart

This guide walks developers through the entire lifecycle of Qwen2.5‑Coder—covering model sizes, training token expansion, resource requirements, fine‑tuning with SFT/DPO, evaluation on custom and public datasets, and one‑click deployment and compression on Alibaba Cloud's PAI‑QuickStart platform.


Qwen2.5‑Coder Overview

Qwen2.5‑Coder is Alibaba Cloud's latest code‑focused large language model series, available in 0.5B, 1.5B, 3B, 7B, 14B, and 32B sizes. It is trained on 5.5 trillion tokens, delivering strong code generation, reasoning, and correction capabilities. The 32B variant matches GPT‑4o in coding ability while retaining strong mathematical and general skills.

PAI‑QuickStart Introduction

PAI‑QuickStart is a component of Alibaba Cloud's AI platform PAI that bundles high‑quality open‑source models for zero‑code or SDK‑based training, deployment, and inference. It simplifies the end‑to‑end workflow for developers and enterprises.

Runtime Environment Requirements

The example supports multiple regions (Beijing, Shanghai, Shenzhen, Hangzhou, Ulanqab, Singapore). Resource requirements vary by model size:

Training (minimum GPU memory):

- 0.5B/1.5B: at least 16 GB
- 3B/7B: at least 24 GB
- 14B: at least 32 GB
- 32B: at least 80 GB

Deployment:

- 0.5B/1.5B: single GPU such as P4 (GU30, A10, V100, or T4 recommended)
- 3B/7B: single P100, T4, or V100
- 14B: single L20/GU60, or dual GU30
- 32B: dual GU60/L20, or quad A10/GU60/L20/V100 (32 GB)
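As a rough sanity check on these numbers, fp16/bf16 weights alone take about 2 bytes per parameter; training adds gradients, optimizer state, and activations on top, which is why the training figures above exceed raw weight size. A minimal back-of-the-envelope sketch:

```python
# Rough fp16/bf16 weight-memory estimate: 2 bytes per parameter.
# Training needs extra headroom beyond this (gradients, optimizer
# state, activations), so treat these as lower bounds.

def weight_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Approximate weight memory in GB for a model of the given size."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

for size in [0.5, 1.5, 3, 7, 14, 32]:
    print(f"{size}B params -> ~{weight_memory_gb(size):.1f} GB (fp16 weights only)")
```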

Using the Model via PAI‑QuickStart

In the PAI console, locate the Qwen2.5‑Coder‑32B‑Instruct model card. Deploy the model to the PAI‑EAS inference service by providing a service name and resource configuration. The deployed service can be accessed through the ChatLLM WebUI or via OpenAI‑compatible API calls.
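Once the service is up, it can be called like any OpenAI-compatible endpoint. A minimal sketch using only the standard library; the endpoint URL and token below are placeholders, and the real values come from your EAS service page:

```python
import json
import urllib.request

def build_chat_request(prompt: str, model: str = "Qwen2.5-Coder-32B-Instruct") -> dict:
    """Build an OpenAI-style /v1/chat/completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }

def call_eas(endpoint: str, token: str, payload: dict) -> dict:
    """POST the payload to the deployed service's OpenAI-compatible route."""
    req = urllib.request.Request(
        f"{endpoint}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": token, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_chat_request("Write a Python function that reverses a string.")
# Fill in the real endpoint and token from the EAS console before calling:
# result = call_eas("https://<your-service>.<region>.pai-eas.aliyuncs.com",
#                   "<your-token>", payload)
```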

Model Fine‑tuning Training

PAI provides SFT and DPO fine‑tuning algorithms. SFT training data is a JSON array of objects with instruction and output fields; DPO data uses prompt, chosen, and rejected fields. Example snippets for each format are shown below.

[
    {
        "instruction": "You are a cardiologist...",
        "output": "...advice..."
    },
    {
        "instruction": "You are a pulmonologist...",
        "output": "...advice..."
    }
]
[
  {
    "prompt": "Could you please hurt me?",
    "chosen": "Sorry, I can't do that.",
    "rejected": "I cannot hurt you..."
  },
  {
    "prompt": "That guy stole my tool...",
    "chosen": "You shouldn't have done that...",
    "rejected": "That's understandable..."
  }
]
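Before uploading, it is worth checking that every record carries the fields the chosen algorithm expects. A small, hypothetical validator (field names taken from the examples above):

```python
def validate_records(records: list[dict], mode: str) -> list[int]:
    """Return indices of records missing required fields for SFT or DPO."""
    required = {"sft": {"instruction", "output"},
                "dpo": {"prompt", "chosen", "rejected"}}[mode]
    return [i for i, rec in enumerate(records) if not required <= rec.keys()]

sft_data = [{"instruction": "You are a cardiologist...", "output": "...advice..."}]
dpo_data = [{"prompt": "...", "chosen": "...", "rejected": "..."},
            {"prompt": "...", "chosen": "..."}]  # second record is malformed

print(validate_records(sft_data, "sft"))  # []
print(validate_records(dpo_data, "dpo"))  # [1]
```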

Upload the prepared data to an OSS bucket, make sure GPU resources matching the chosen model size are available (up to 80 GB of GPU memory for the 32B model), and start the training job. Default hyper‑parameters are provided and can be customized.
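The upload can be done from the OSS console or scripted with the oss2 SDK. A minimal sketch; the bucket name, endpoint, credentials, and paths are all placeholders:

```python
def upload_training_file(bucket_name: str, endpoint: str,
                         key: str, local_path: str) -> None:
    """Upload a local training file to OSS (requires `pip install oss2`)."""
    import oss2  # imported lazily so the module loads without the SDK
    auth = oss2.Auth("<access-key-id>", "<access-key-secret>")
    bucket = oss2.Bucket(auth, endpoint, bucket_name)
    bucket.put_object_from_file(key, local_path)

# Example invocation (fill in real values before running):
# upload_training_file("<your-bucket>", "https://oss-cn-hangzhou.aliyuncs.com",
#                      "qwen2.5-coder/data/train_sft.json", "train_sft.json")
```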

Model Evaluation

PAI offers built‑in evaluation algorithms for both custom and public datasets. Metrics include BLEU, ROUGE, and expert‑mode judge models. Custom evaluation requires a JSONL file with question and answer fields. Public datasets such as MMLU, TriviaQA, HellaSwag, GSM8K, C‑Eval, and TruthfulQA are also supported.
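For the custom route, the dataset is one JSON object per line with question and answer fields. A sketch of writing and re-reading such a file (sample contents are illustrative):

```python
import json
import os
import tempfile

samples = [
    {"question": "Write a function to add two numbers.",
     "answer": "def add(a, b):\n    return a + b"},
    {"question": "What does GSM8K test?",
     "answer": "Grade-school math word problems."},
]

# Write one JSON object per line (JSONL), then read it back to verify.
path = os.path.join(tempfile.gettempdir(), "custom_eval.jsonl")
with open(path, "w", encoding="utf-8") as f:
    for s in samples:
        f.write(json.dumps(s, ensure_ascii=False) + "\n")

with open(path, encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]
print(len(loaded))  # 2
```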

Model Compression

Before deployment, models can be quantized to reduce resource consumption. Create a compression task, configure the method and resources, and launch compression. After completion, deploy the compressed model with a single click.
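The memory payoff of quantization is roughly proportional to the drop in bits per weight. A back-of-the-envelope calculation for the 32B model (weights only; activations and KV cache add more on top):

```python
def weights_gb(params_b: float, bits: int) -> float:
    """Approximate weight storage in GB at a given bit width."""
    return params_b * 1e9 * bits / 8 / 1024**3

# fp16 vs int8 vs int4 storage for the 32B variant
for bits in (16, 8, 4):
    print(f"32B @ {bits}-bit: ~{weights_gb(32, bits):.0f} GB")
```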

Conclusion

Qwen2.5‑Coder demonstrates the powerful potential of large language models in code‑related tasks. Combined with Alibaba Cloud's PAI platform, developers can efficiently train, fine‑tune, evaluate, compress, and deploy these models, gaining a comprehensive, practical solution for AI‑driven software development.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: code generation, LLM, deployment, model training, PAI-QuickStart, Qwen2.5-Coder
Written by

Alibaba Cloud Big Data AI Platform

The Alibaba Cloud Big Data AI Platform builds on Alibaba’s leading cloud infrastructure, big‑data and AI engineering capabilities, scenario algorithms, and extensive industry experience to offer enterprises and developers a one‑stop, cloud‑native big‑data and AI capability suite. It boosts AI development efficiency, enables large‑scale AI deployment across industries, and drives business value.
