
Overview of Open‑Source Large Language Models: Llama 2, ChatGLM 2, Usage, Fine‑Tuning and Comparison

This article reviews the rapid evolution of open‑source large language models, covering Meta's Llama 2 series and Tsinghua's ChatGLM 2: their RLHF training, larger context windows, the safety‑usefulness trade‑off, benchmark gains, download and fine‑tuning procedures, and how closely they now rival proprietary models such as GPT‑4.

Tencent Cloud Developer

This article introduces the rapid evolution of open‑source large language models (LLMs), focusing on Meta's Llama 2 series and Tsinghua's ChatGLM 2. It summarizes the main capabilities of these models, their training methods, and the differences compared with earlier versions and other open‑source models such as Falcon and MPT.

Model capabilities

Llama 2 is released in three parameter sizes (7B, 13B, 70B). Compared with Llama 1, it adds Reinforcement Learning from Human Feedback (RLHF) and uses supervised fine‑tuning to create the Llama‑2‑chat variant. The chat model is trained with RLHF, including rejection sampling and Proximal Policy Optimization (PPO), which improves code generation, mathematical reasoning, and context understanding.
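Rejection sampling, as used in Llama 2's RLHF stage, amounts to generating several candidate responses and keeping the one the reward model scores highest. A minimal sketch with toy stand‑ins for the generator and reward model (the function names and scoring rule below are illustrative assumptions, not Meta's actual pipeline):

```python
import random

def rejection_sample(prompt, generate, reward, n_candidates=4):
    """Sample several candidate responses and keep the highest-reward one.

    `generate` and `reward` stand in for the chat model and the trained
    reward model; both are toy placeholders here.
    """
    candidates = [generate(prompt) for _ in range(n_candidates)]
    return max(candidates, key=reward)

# Toy stand-ins: the "model" emits answers of random length and the
# "reward model" simply prefers longer ones.
random.seed(0)
toy_generate = lambda p: p + "!" * random.randint(1, 5)
toy_reward = len

best = rejection_sample("hi", toy_generate, toy_reward)
```

In the real pipeline the best-scored samples are then fed back as fine‑tuning targets (or used alongside PPO), iteratively pushing the policy toward high‑reward outputs.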

ChatGLM 2 retains the strengths of the first‑generation ChatGLM while extending the context window to 32K tokens, adopting Multi‑Query Attention for lower inference memory, and achieving significant gains on benchmarks (MMLU +23 %, CEval +33 %, GSM8K +571 %).
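Multi‑Query Attention lowers inference memory because all query heads share a single key/value head, shrinking the KV cache by roughly the query‑head count. A back‑of‑the‑envelope sketch (the layer/head numbers are illustrative assumptions, not ChatGLM 2's official specification):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch=1, dtype_bytes=2):
    """KV-cache size: keys + values stored per layer for every cached token."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * dtype_bytes

# Illustrative numbers loosely modeled on a 6B model (28 layers,
# 32 query heads of dim 128) at an 8K context, in fp16.
mha = kv_cache_bytes(n_layers=28, n_kv_heads=32, head_dim=128, seq_len=8192)
mqa = kv_cache_bytes(n_layers=28, n_kv_heads=1, head_dim=128, seq_len=8192)
print(f"MHA cache: {mha / 2**30:.1f} GiB, MQA cache: {mqa / 2**30:.3f} GiB")
```

With 32 query heads the cache shrinks 32×, which is what makes long contexts like 32K tokens practical on a single consumer GPU.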

Safety and usefulness trade‑off

Llama 2 employs two separate reward models: one optimizes usefulness (Helpfulness RM) and the other optimizes safety (Safety RM), addressing the common conflict between these two objectives.
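The two scores are combined piecewise: the safety score takes over for prompts flagged as risky or when safety drops below a threshold, otherwise helpfulness is optimized. A toy sketch (the function signature and the exact threshold value are assumptions for illustration, not Meta's published hyperparameters):

```python
def combined_reward(helpfulness, safety, is_risky_prompt, safety_threshold=0.15):
    """Piecewise combination of the two reward model scores (sketch).

    Fall back to the Safety RM's score whenever the prompt is tagged as a
    safety risk or the safety score is below the threshold; otherwise
    optimize for helpfulness.
    """
    if is_risky_prompt or safety < safety_threshold:
        return safety
    return helpfulness
```

The gating means the policy is never rewarded for being helpful on a response the Safety RM considers unsafe, which is how the trade‑off is resolved in practice.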

Performance comparison

A table in the original article shows that Llama 2‑7B improves factuality and informativeness by 21.37 % over Llama 1‑7B while reducing toxic content by 7.61 %.

Model download and deployment

To obtain Llama 2, users can apply for a download link at Meta's official page or via Hugging Face after approval. For ChatGLM 2, the GitHub repository THUDM/ChatGLM2-6B provides the code and model weights.

Example commands for cloning and installing the ChatGLM 2 repository:

git clone https://github.com/THUDM/ChatGLM2-6B
cd ChatGLM2-6B
pip install -r requirements.txt

Running a chat completion example with Llama 2‑7B‑chat:

torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir llama-2-7b-chat/ \
    --tokenizer_path tokenizer.model \
    --max_seq_len 512 --max_batch_size 4

Evaluation and user experience

The article presents several Q&A experiments comparing ChatGLM 2 with GPT‑4 (Bing) on tasks such as job recommendation, geometry calculation, code generation, language understanding, and role‑play. Overall, ChatGLM 2 shows noticeable improvements over its predecessor, especially in code generation and multi‑turn dialogue, though mathematical reasoning still lags behind GPT‑4.

Additional fine‑tuning instructions are provided, including dataset formatting and a reference script, train/sft/finetune.sh. The article also discusses merging LoRA adapters into the base Llama model for customized downstream tasks.
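Merging a LoRA adapter is simple arithmetic: the low‑rank update B·A, scaled by α/r, is added into the frozen base weight, so inference needs no extra adapter pass. A minimal NumPy sketch with made‑up shapes (illustrative only, not the article's actual merge script):

```python
import numpy as np

def merge_lora(w, lora_a, lora_b, alpha, rank):
    """Fold a LoRA adapter back into a base weight matrix.

    LoRA learns a low-rank update B @ A; merging adds the scaled update
    to the frozen base weight once, so no adapter matmul is needed at
    inference time.
    """
    return w + (alpha / rank) * (lora_b @ lora_a)

# Toy shapes: a 4x4 base weight with a rank-2 adapter.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
a = rng.standard_normal((2, 4))   # lora_A: rank x in_features
b = rng.standard_normal((4, 2))   # lora_B: out_features x rank
merged = merge_lora(w, a, b, alpha=16, rank=2)
```

The merged matrix has the same shape as the original weight, so the resulting checkpoint loads exactly like the base model.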

Conclusion

Open‑source LLMs like Llama 2 and ChatGLM 2 are rapidly closing the gap with proprietary models such as GPT‑3.5/4. They enable developers to train domain‑specific models and foster broader AI adoption.

Tags: AI, model fine-tuning, ChatGLM2, Llama2, open-source LLM
Written by Tencent Cloud Developer

Official Tencent Cloud community account that brings together developers, shares practical tech insights, and fosters an influential tech exchange community.
