AI Algorithm Path
Jul 29, 2025 · Artificial Intelligence

Why GLM‑4.5 Sets a New Benchmark for Open‑Source Large Language Models

GLM‑4.5 and its lightweight Air variant, featuring a deep‑layered MoE design, grouped‑query attention, and dual inference modes, achieve third‑place overall on 12 hard‑core benchmarks, excel in web‑browsing and tool‑calling with a 90.6 % success rate, and introduce novel training tricks such as the Muon optimizer and Slime RL framework.

AI · GLM-4.5 · MoE
8 min read
Baobao Algorithm Notes
Jan 11, 2025 · Artificial Intelligence

Why Phi‑4’s 14B Model Outperforms GPT‑4 on STEM and Reasoning Tasks

Microsoft Research’s Phi‑4 model, a 14‑billion‑parameter LLM, leverages extensive synthetic data, advanced tokenization, and a two‑stage training pipeline to achieve superior performance on STEM question answering, long‑context reasoning, and safety benchmarks, rivaling larger models like GPT‑4.

AI safety · Phi-4 · benchmarking
15 min read
NewBeeNLP
Dec 2, 2024 · Artificial Intelligence

What Are Today’s Unified Generation-and-Understanding Multimodal Model Architectures?

This article surveys current unified generation-and-understanding multimodal large-model architectures, compares LLM-centric and LLM-plus-diffusion designs, extracts common insights, details large-scale training tricks from models like Emu3, Chameleon and Janus, and outlines open research directions for visual encoders.

diffusion · large language models · multimodal
5 min read