How MiniMax M2.7 Is Pioneering Self‑Evolving AI Models

MiniMax’s open‑source M2.7 model, released in April 2026, demonstrates the first self‑evolving AI agent that autonomously updates its memory, learns new skills, and optimizes its own training loop, achieving up to 30% performance gains and leading benchmark scores across programming, ML automation, and productivity tasks.

AI Large-Model Wave and Transformation Guide

01 | Milestone: AI’s First Deep Self‑Iteration

In April 2026 MiniMax open‑sourced the M2.7 model, the first self‑evolving agent that can update its own memory, acquire dozens of complex skills, and refine its learning pipeline based on experimental results.

Concrete case: an internal M2.7 build ran more than 100 autonomous iteration cycles while optimizing a programming scaffold, analysing failure traces, modifying code, evaluating outcomes, and deciding whether to keep or roll back each change, reaching a 30% performance improvement without human intervention.
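The keep-or-roll-back cycle described above can be sketched as a simple hill-climbing loop. This is a minimal illustration, not MiniMax's actual implementation: the scaffold is modeled as a dict of parameters, and `evaluate` stands in for whatever benchmark the agent scores itself against.

```python
import random

def evolve(scaffold, evaluate, n_rounds=100):
    """Hedged sketch of a keep-or-roll-back iteration loop.

    `scaffold` is any mutable config (here a dict of numeric parameters)
    and `evaluate` maps a scaffold to a score; both are placeholders,
    not MiniMax's real interfaces.
    """
    best_score = evaluate(scaffold)
    for _ in range(n_rounds):
        candidate = dict(scaffold)
        # "Modify code": perturb one parameter of the scaffold.
        key = random.choice(list(candidate))
        candidate[key] += random.uniform(-0.1, 0.1)
        # "Evaluate outcomes": score the modified candidate.
        score = evaluate(candidate)
        # "Keep or roll back": commit only strict improvements.
        if score > best_score:
            scaffold.update(candidate)
            best_score = score
    return scaffold, best_score
```

Because only improvements are committed, the score never regresses across rounds, which is what makes a 100-round unattended run safe in principle.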

Illustration of self‑evolving AI

02 | Hard Data: What Makes M2.7 Strong?

Programming Ability – On Par with Top Closed‑Source Models

SWE‑Pro 56.22%: matches GPT‑5.3‑Codex.

SWE‑Bench Verified 78%: far ahead of Claude Opus 4.6 (55%).

Terminal Bench 2 57.0%: strong system‑level engineering understanding.

Machine‑Learning Automation – Only Behind Two Giants

On MLE Bench Lite (22 real ML competitions) M2.7 earned a 66.6% medal rate (9 gold, 5 silver, 1 bronze), trailing only Opus‑4.6 and GPT‑5.4.

Productivity – Best Among Open‑Source Models

GDPval‑AA ELO 1495: highest among open‑source weights, surpassing GPT‑5.3.

97% skill compliance: stable behaviour across 40 complex scenarios, each exceeding 2,000 tokens.

Cost Advantage – 50‑60× Cheaper

Compared with Claude Opus 4.6, M2.7’s input cost is 50× lower and its output cost 60× lower.
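A quick back-of-the-envelope check of what those multipliers mean in dollars, using the M2.7 list prices from the spec section below ($0.30/$1.20 per million tokens). The Opus prices here are derived from the article's 50×/60× claim, not quoted from any official price list.

```python
# M2.7 list prices from the article's spec section (USD per million tokens).
M27_IN, M27_OUT = 0.30, 1.20
# Opus-4.6 prices implied by the article's 50x / 60x multipliers
# (derived for illustration, not taken from a published price list).
OPUS_IN, OPUS_OUT = M27_IN * 50, M27_OUT * 60

def job_cost(in_millions, out_millions, price_in, price_out):
    """Cost in USD for a job measured in millions of tokens."""
    return in_millions * price_in + out_millions * price_out

# Example workload: 10M input tokens, 2M output tokens.
m27_cost = job_cost(10, 2, M27_IN, M27_OUT)      # 10*0.30 + 2*1.20 = 5.40
opus_cost = job_cost(10, 2, OPUS_IN, OPUS_OUT)   # 10*15.00 + 2*72.00 = 294.00
```

For an input-heavy agentic workload like this one, the blended saving lands between the two per-direction multipliers.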

03 | Technical Reveal: How Self‑Evolution Works

The self‑evolution loop builds on MiniMax’s internal OpenClaw framework. The core cycle runs autonomously for >100 rounds, during which the model discovers optimisations such as:

Sampling Parameter Auto‑Tuning: systematic search for optimal temperature, frequency penalty, etc., outperforming manual tuning.
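Auto-tuning of this kind reduces, in its simplest form, to a search over a parameter grid scored by a benchmark run. A minimal sketch follows; the grid values and the `evaluate` callback are illustrative assumptions, not M2.7's actual search space or scoring harness.

```python
from itertools import product

def tune_sampling(evaluate, temps=(0.7, 1.0, 1.3),
                  freq_penalties=(0.0, 0.5, 1.0)):
    """Exhaustive grid search over sampling parameters.

    `evaluate` stands in for a benchmark run (e.g. pass rate on a
    held-out task set). Returns (best_score, best_params).
    """
    best = None
    for t, fp in product(temps, freq_penalties):
        score = evaluate(temperature=t, frequency_penalty=fp)
        if best is None or score > best[0]:
            best = (score, {"temperature": t, "frequency_penalty": fp})
    return best
```

A real agent would likely use something smarter than exhaustive search (successive halving, Bayesian optimisation), but the keep-the-best-scoring-configuration logic is the same.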

Self‑Discovered Workflows: after fixing a bug the agent automatically scans other files for the same pattern without prior instruction.

Infinite‑Loop Detection: built‑in self‑check mechanisms prevent the agent from getting stuck on complex tasks.
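One simple way to implement such a self-check is to watch for the agent revisiting the same state too often within a sliding window. The article does not describe MiniMax's actual mechanism; the window size, repeat threshold, and state hashing below are all assumptions for illustration.

```python
from collections import deque

class LoopGuard:
    """Flags when an agent revisits the same state too often.

    Sliding-window state hashing is one common heuristic for loop
    detection; the thresholds here are illustrative defaults.
    """
    def __init__(self, window=20, max_repeats=3):
        self.recent = deque(maxlen=window)
        self.max_repeats = max_repeats

    def step(self, state) -> bool:
        """Record a (hashable) state; return True if the agent looks stuck."""
        key = hash(state)
        self.recent.append(key)
        return self.recent.count(key) >= self.max_repeats
```

On a `True` return, the controlling loop would typically abort the current strategy and fall back to replanning rather than burning further iterations.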

MiniMax estimates M2.7 handled 30-50% of routine ML‑engineer tasks during training, requiring researcher intervention only for critical decisions.

OpenClaw architecture diagram
Self‑evolution loop flowchart

04 | Implications

From “Human‑Trains‑Model” to “Model‑Trains‑Model”

Current self‑evolution focuses on the agent scaffolding layer, not the model weights. If the approach scales, future versions could iterate far faster.

Qualitative Leap in Agentic Capability

M2.7 functions as an end‑to‑end engineer, delivering complete projects and cutting real‑world incident recovery times to under three minutes.

Open‑Source Ecosystem Impact

Despite a modified MIT license that restricts commercial use, the breakthrough has spurred rapid 0‑Day adaptations on domestic compute platforms such as Huawei Ascend and MuXi.

Technical Specs at a Glance

Architecture: 230‑billion‑parameter MoE, activating 10 billion parameters per inference.

Context window: 200k tokens; maximum output 130k tokens.

Pricing: $0.30 per million input tokens, $1.20 per million output tokens.

Open‑source repository: https://huggingface.co/MiniMaxAI/MiniMax-M2.7

Suggested inference parameters:

temperature=1.0
top_p=0.95
top_k=40

Default system prompt:

You are a helpful assistant. Your name is MiniMax-M2.7 and is built by MiniMax.
Tags: open-source, Large Language Model, benchmark, Agentic AI, cost efficiency, self-evolving AI
Written by

AI Large-Model Wave and Transformation Guide

Focuses on the latest large-model trends, applications, technical architectures, and related information.
