Are Huawei’s Pangu Pro MoE and Alibaba’s Qwen‑2.5 14B Model Really Identical?
A recent GitHub study alleges that Huawei's Pangu Pro MoE and Alibaba's Qwen‑2.5 14B share an almost identical parameter structure, reporting an attention‑parameter correlation of 0.927 and prompting plagiarism accusations. Huawei counters that the model uses a novel MoGE architecture and strictly complies with open‑source licenses.
Recently a GitHub‑posted study sparked debate by claiming that Huawei’s Pangu Pro MoE and Alibaba’s Qwen‑2.5 14B models share an “astonishingly similar” parameter structure.
The author’s empirical analysis reported an average attention‑parameter correlation of 0.927 between the two models, far exceeding typical ranges and leading some netizens to suspect plagiarism.
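The study's exact methodology is not reproduced here, but a parameter‑correlation comparison of the kind described might look like the following minimal sketch: flatten corresponding weight tensors from two models and compute their Pearson correlation. The function name and the random stand‑in matrices are illustrative assumptions, not the study's actual code or data.

```python
import numpy as np

def param_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two flattened parameter tensors."""
    a, b = a.ravel(), b.ravel()
    assert a.shape == b.shape, "tensors must have matching shapes"
    return float(np.corrcoef(a, b)[0, 1])

# Illustrative stand-ins for two models' attention weight matrices
rng = np.random.default_rng(0)
w_base = rng.standard_normal((64, 64))
w_similar = w_base + 0.1 * rng.standard_normal((64, 64))  # nearly identical weights
w_indep = rng.standard_normal((64, 64))                   # independently initialized

print(param_correlation(w_base, w_similar))  # close to 1
print(param_correlation(w_base, w_indep))    # close to 0
```

Independently trained models typically show near‑zero correlation between corresponding weights, which is why a sustained average of 0.927 across attention layers struck readers as anomalous.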
On July 5, Huawei’s Pangu team issued a statement asserting that the open‑source Pangu Pro MoE was built and trained on the Ascend hardware platform, featuring a novel Grouped Mixture‑of‑Experts (MoGE) architecture that addresses load‑balancing in large‑scale distributed training, and is not derived from other vendors’ models.
The statement also emphasized that certain base‑component code was inspired by open‑source practices, with proper license compliance and clear copyright notices, reflecting a commitment to open‑source collaboration and respect for third‑party intellectual property.
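Huawei's statement does not detail how MoGE works, but the public description (experts are partitioned into groups, and each token activates an equal number of experts within every group, so load stays balanced across the devices hosting each group) can be sketched as grouped top‑k routing. The function below is an illustrative assumption, not Pangu's actual implementation.

```python
import numpy as np

def grouped_topk_routing(scores: np.ndarray, n_groups: int, k_per_group: int) -> np.ndarray:
    """Select the top-k experts independently within each group.

    scores: (n_experts,) router logits for one token.
    Every token activates exactly k_per_group experts in every group,
    so compute load is balanced across groups (e.g. one group per device).
    """
    n_experts = scores.shape[0]
    assert n_experts % n_groups == 0, "experts must divide evenly into groups"
    group_size = n_experts // n_groups
    chosen = []
    for g in range(n_groups):
        group_scores = scores[g * group_size:(g + 1) * group_size]
        top = np.argsort(group_scores)[-k_per_group:]  # best experts within the group
        chosen.extend(top + g * group_size)            # map back to global expert ids
    return np.array(sorted(chosen))

# Example: 8 experts split into 4 groups, 1 expert activated per group
scores = np.array([0.1, 0.9, 0.3, 0.2, 0.8, 0.4, 0.5, 0.6])
print(grouped_topk_routing(scores, n_groups=4, k_per_group=1))  # [1 2 4 7]
```

By contrast, plain top‑k routing over all experts can concentrate traffic on a few popular experts, which is the load‑imbalance problem the grouped scheme is said to address.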
Efficient Ops
This public account is maintained by Xiaotianguo and friends and regularly publishes widely read original technical articles. We focus on operations transformation and aim to accompany you throughout your operations career.
