Unlocking GLM & ChatGLM: Deep Dive into MindSpore Large‑Model Techniques

The MindSpore Season 2 open class offers a comprehensive overview of the GLM and ChatGLM architectures, positional-embedding strategies, and optimizations for stable training, along with step-by-step instructions for deploying large language models with Ascend, ModelArts, and MindSpore Transformers. It closes with a preview of the upcoming multimodal remote-sensing session.

Huawei Cloud Developer Alliance

MindSpore’s Season 2 open class focuses on large‑model techniques, covering the transition from GLM to ChatGLM and deployment on Ascend + ModelArts + MindSpore.

GLM Model Structure

The LLM evolution tree is examined, covering the three main structural families: Autoregressive (GPT-style decoders), Autoencoding (BERT-style encoders), and Encoder-Decoder (T5-style) architectures.

Autoregressive Blank Infilling

Through autoregressive blank infilling, GLM combines the Encoder-Decoder paradigm with Autoregressive and Autoencoding capabilities, enabling NLU, conditional generation, and unconditional generation within a single model.
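As a rough illustration of how a blank-infilling training sample can be assembled, here is a hypothetical helper simplified to a single masked span (the real GLM implementation also samples and shuffles multiple spans and builds the matching attention mask):

```python
def build_glm_input(tokens, span):
    """Sketch of GLM's autoregressive blank-infilling input.

    Part A is the corrupted text with the span replaced by [MASK];
    Part B is [START] followed by the span, generated autoregressively.
    GLM uses 2D positions: pos1 is the location in the corrupted text
    (all tokens of a span share the mask's position), pos2 counts
    within the span (and is 0 everywhere in Part A).
    """
    s, e = span
    part_a = tokens[:s] + ["[MASK]"] + tokens[e:]
    part_b = ["[START]"] + tokens[s:e]
    pos1 = list(range(len(part_a))) + [s] * len(part_b)
    pos2 = [0] * len(part_a) + list(range(1, len(part_b) + 1))
    return part_a + part_b, pos1, pos2
```

Because Part A attends bidirectionally while Part B is generated left to right, the same sample exercises both autoencoding-style understanding and autoregressive generation.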

Positional Embedding Optimizations for Stable Training

Key techniques include Float32 softmax, embedding‑gradient scaling, DeepNorm (Post LayerNorm), RoPE (rotary position encoding), and gated‑linear units (GLU).
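To make the DeepNorm idea concrete, here is a minimal pure-Python sketch (illustrative only, not MindSpore code) of the Post-LN residual it prescribes; the alpha value follows the (2N)^(1/2) choice reported for deep decoders, which should be treated as one possible setting rather than a universal constant:

```python
import math

def layer_norm(x, eps=1e-5):
    """Plain layer normalization over a single feature vector."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

def deepnorm_residual(x, sublayer_out, num_layers):
    """DeepNorm-style Post-LN residual: LN(alpha * x + G(x)).

    Scaling the residual branch by alpha = (2N)^(1/2) for an N-layer
    decoder keeps per-layer updates bounded, which stabilises very
    deep post-LN training.
    """
    alpha = (2 * num_layers) ** 0.5
    return layer_norm([alpha * xi + gi for xi, gi in zip(x, sublayer_out)])
```

The Float32-softmax and embedding-gradient-scaling tricks follow the same spirit: keep the numerically fragile steps (attention softmax, embedding updates) in a safer regime while the rest of the model runs in half precision.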

Absolute Positional Embeddings

For each token vector x_k, a position vector p_k (dependent only on position index k) is added.
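The classic fixed instance of this scheme is the Transformer's sinusoidal embedding; a minimal sketch of computing p_k and forming x_k + p_k:

```python
import math

def sinusoidal_position(k, d_model):
    """Fixed absolute position vector p_k (Transformer-style sinusoids).

    Even dimensions use sin, odd dimensions use cos; the frequency decays
    with the dimension index, giving each position a unique fingerprint.
    """
    p = []
    for i in range(d_model // 2):
        freq = 1.0 / (10000 ** (2 * i / d_model))
        p.append(math.sin(k * freq))
        p.append(math.cos(k * freq))
    return p

def add_position(x_k, k):
    """The first-layer input is simply x_k + p_k, element-wise."""
    return [xi + pi for xi, pi in zip(x_k, sinusoidal_position(k, len(x_k)))]
```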

Learned Positional Embeddings

The position vectors are ordinary parameters updated during training, which makes them adaptable to the data; the trade-off is that positions beyond the maximum sequence length have no entry in the table, so the model cannot extrapolate past it.
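A toy sketch of the lookup table (pure Python, no training loop) makes the length limitation visible: any position past max_len simply has no embedding to return.

```python
import random

class LearnedPositionalEmbedding:
    """A trainable position table: one d_model-dimensional vector per index.

    In a real model the table is updated by gradient descent like any
    other weight; here it is just randomly initialised to show the shape.
    """
    def __init__(self, max_len, d_model, seed=0):
        rng = random.Random(seed)
        self.table = [[rng.gauss(0.0, 0.02) for _ in range(d_model)]
                      for _ in range(max_len)]

    def __call__(self, k):
        if k >= len(self.table):
            raise IndexError(f"position {k} exceeds max_len {len(self.table)}")
        return self.table[k]
```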

Relative Positional Embeddings

Relative distance m‑n between positions is injected into the self‑attention matrix, allowing the model to handle variable‑length inputs; however, it slows training and inference for long sequences.
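One common way to inject the offset, sketched below as a T5-style learned bias added to the raw attention scores (a simplified illustration; real implementations typically bucket offsets logarithmically rather than clipping linearly):

```python
def relative_bias_scores(scores, bias, max_dist):
    """Add a learned relative-position bias b[m - n] to attention scores.

    scores[m][n] is the query-m / key-n dot product; the bias depends
    only on the clipped offset m - n, so one small table serves any
    sequence length -- at the cost of materialising an L x L bias
    matrix on every forward pass, which slows long sequences.
    """
    L = len(scores)
    out = [[0.0] * L for _ in range(L)]
    for m in range(L):
        for n in range(L):
            offset = max(-max_dist, min(max_dist, m - n))
            out[m][n] = scores[m][n] + bias[offset + max_dist]
    return out
```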

Rotary Positional Embeddings

Rotary embeddings encode absolute positions with a rotation matrix and explicitly introduce relative position dependence into the self‑attention formula, preserving relative information, enabling efficient caching, and allowing attention decay with distance.
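The key property is easy to demonstrate in a few lines: rotating each pair of query/key dimensions by a position-dependent angle makes the attention score a function of the relative position only. A minimal pure-Python sketch:

```python
import math

def rope(vec, pos, base=10000.0):
    """Rotary position embedding: rotate consecutive dimension pairs
    of a query/key vector by angles proportional to the position."""
    d = len(vec)
    out = []
    for i in range(d // 2):
        theta = pos / (base ** (2 * i / d))
        x1, x2 = vec[2 * i], vec[2 * i + 1]
        out.append(x1 * math.cos(theta) - x2 * math.sin(theta))
        out.append(x1 * math.sin(theta) + x2 * math.cos(theta))
    return out

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))
```

Because a dot product of two rotated vectors only sees the difference of their rotation angles, dot(rope(q, m), rope(k, n)) depends on m - n alone; this is also why cached keys stay valid as generation advances.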

ChatGLM Evolution Roadmap

Running Inference with MindSpore Transformers

Steps to run a ChatGLM inference demo:

1. Create an OpenI account and start a Cloud Brain NPU task (or set up a GPU environment).

2. Install MindSpore (see https://www.mindspore.cn/install).

3. Install MindSpore Transformers: clone the MindFormers repository and build it. If you are on MindSpore 1.10, install MindSpore Transformers 0.6 instead.

4. Clone the course code repository, download the checkpoint and tokenizer files, and run the demo script.

# Install MindSpore Transformers from the dev branch
git clone -b dev https://gitee.com/mindspore/mindformers.git
cd mindformers
bash build.sh
# For MindSpore 1.10 use MindSpore Transformers 0.6 instead

# Fetch the course code, then the checkpoint and tokenizer files
git clone https://github.com/mindspore-courses/step_into_llm.git
cd step_into_llm/Season2.step_into_llm/01.ChatGLM/
wget https://ascend-repo-modelzoo.obs.cn-east-2.myhuaweicloud.com/XFormer_for_mindspore/glm/glm_6b.ckpt
wget https://ascend-repo-modelzoo.obs.cn-east-2.myhuaweicloud.com/XFormer_for_mindspore/glm/ice_text.model

# Run the interactive inference demo
python cli_demo.py

Next Session Preview

On October 28, Sun Xian (Researcher, Chinese Academy of Sciences) will present “Multimodal Remote‑Sensing Intelligent Interpretation Foundation Model,” discussing challenges, technical routes, and typical application scenarios of foundation models in remote sensing.

Note: The session time has been adjusted to 18:30‑20:00.
