Alibaba Cloud Big Data AI Platform
Feb 24, 2025 · Artificial Intelligence

Unlock Data+AI Fusion: Fine‑Tune Multimodal Models on DataWorks with GPU‑Ready Notebooks

This tutorial shows how to use Alibaba Cloud DataWorks' serverless GPU resource groups together with the open‑source LLaMA‑Factory framework to fine‑tune the Qwen2‑VL‑2B multimodal model for tourism‑domain Q&A, covering environment setup, dataset preparation, parameter configuration, training, and interactive inference.

DataWorks · GPU · LLaMA-Factory
0 likes · 10 min read
Baobao Algorithm Notes
Sep 5, 2024 · Artificial Intelligence

Why Small LLMs Are the Secret Weapon for Scaling Large Model Research

The article explains how homologous small language models (trained with the same tokenizer and data as their large counterparts) serve as cheap, fast experimental platforms that can predict large‑model performance, guide pre‑training decisions, and support techniques like distillation and reward modeling.

AI research · LLM scaling · Qwen2
0 likes · 13 min read
Java Tech Enthusiast
Jul 12, 2024 · Artificial Intelligence

Why Alibaba’s Qwen‑2 Is Outperforming Global LLMs and What It Means for AI

After OpenAI halted API access in China, Alibaba’s Tongyi Qwen‑2 quickly rose to the top of global open‑source LLM leaderboards, surpassing Meta’s Llama‑3 and other contenders; the article details benchmark scores, performance gains over previous versions, and the implications for China’s AI ecosystem.

AI benchmark · Alibaba · China AI
0 likes · 5 min read
Alibaba Cloud Big Data AI Platform
Jul 8, 2024 · Artificial Intelligence

How to Fine‑Tune Qwen2 with Direct Preference Optimization on Alibaba Cloud PAI

This guide explains the Direct Preference Optimization (DPO) algorithm for aligning large language models, demonstrates its advantages over RLHF, and provides a step‑by‑step tutorial on using Alibaba Cloud’s PAI‑QuickStart to fine‑tune the open‑source Qwen2 series, including data preparation, hyper‑parameter settings, training, deployment, and API usage.

AI alignment · Alibaba Cloud · DPO
0 likes · 14 min read
Baobao Algorithm Notes
Jun 6, 2024 · Artificial Intelligence

What’s New in Qwen2? A Deep Dive into the Latest Open‑Source LLMs

Qwen2 introduces five new pre‑trained and instruction‑tuned model sizes, expands multilingual training to 27 additional languages, boosts code and math abilities, supports context windows of up to 128K tokens, and achieves leading benchmark results across NLU, code, math, and safety, with detailed model specs and evaluation data provided.

AI · Qwen2 · multilingual
0 likes · 11 min read