Is Cursor’s Composer 2 Powered by Kimi? The Truth Is More Complex
A developer discovered that Cursor’s Composer 2 is built on the Kimi K2.5 model with reinforcement‑learning fine‑tuning, sparking a licensing dispute that was officially confirmed and resolved within a day, and highlighting the opaque yet collaborative nature of today’s open AI model ecosystem.
Cursor announced Composer 2, a code‑focused model priced at a fraction of GPT‑5, which quickly attracted attention.
While debugging the Cursor API, developer @fynnso discovered the model identifier kimi-k2p5-rl-0317-s515-fast, which reads as “Kimi K2.5 + reinforcement learning”.
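The inference from the identifier can be reproduced mechanically. The segment meanings below are guesses based on common model-naming conventions, not documented fields:

```python
# Hypothetical decoder for the leaked model identifier. What each
# segment denotes is an assumption inferred from naming conventions.
MODEL_ID = "kimi-k2p5-rl-0317-s515-fast"

def parse_model_id(model_id: str) -> dict:
    family, version, method, date, step, variant = model_id.split("-")
    return {
        "family": family,                      # "kimi" -> base model family
        "version": version.replace("p", "."),  # "k2p5" -> "k2.5"
        "method": method,                      # "rl"   -> reinforcement learning
        "date": date,                          # "0317" -> likely a training date
        "step": step,                          # "s515" -> likely a checkpoint step
        "variant": variant,                    # "fast" -> serving variant
    }

print(parse_model_id(MODEL_ID)["version"])  # k2.5
```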
Cursor’s marketing had described Composer 2 as “first to continue pre‑training the base model and combine reinforcement learning”, but omitted any mention of Kimi as the base.
Moonshot’s tokenizer team, led by Du Yulun, tested Composer 2’s tokenizer and found it identical to Kimi’s, concluding that the model is a further‑trained version of Kimi. They publicly questioned Cursor on licensing compliance and payment via a tweet to co‑founder Michael Truell.
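A tokenizer-identity check of the kind Moonshot’s team describes can be sketched as follows. The vocabularies, encoders, and comparison logic here are illustrative assumptions, not Moonshot’s actual test:

```python
# Minimal sketch of a tokenizer-identity check (illustrative only):
# two tokenizers are treated as indistinguishable when they share the
# same vocabulary and produce the same token IDs on probe strings.

def same_tokenizer(vocab_a: dict, vocab_b: dict,
                   encode_a, encode_b, probes: list) -> bool:
    """Return True if both tokenizers look identical on the evidence given."""
    if vocab_a != vocab_b:  # same token -> ID mapping
        return False
    return all(encode_a(p) == encode_b(p) for p in probes)

# Toy byte-level encoder standing in for a real tokenizer (assumption).
vocab = {chr(i): i for i in range(128)}
encode = lambda text: [vocab[c] for c in text]

print(same_tokenizer(vocab, dict(vocab), encode, encode, ["kimi", "k2.5"]))
```

In practice one would load both tokenizers (e.g. via a library such as Hugging Face `tokenizers`) and diff their vocab files and merge rules rather than hand-built dicts.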
Kimi K2.5 is released under a modified MIT license that requires any commercial product with over 100 million monthly active users or $20 million in revenue to prominently display “Kimi K2.5”. Cursor, valued at $29.3 billion, almost certainly crosses that revenue threshold.
Under intense public scrutiny, Cursor co‑founder Lee Robinson admitted that Composer 2 is indeed built on Kimi K2.5 with reinforcement‑learning fine‑tuning and that the partnership complies with licensing terms through partner agreements such as Fireworks.
The official Moonshot account later posted a congratulatory statement, confirming that Kimi‑K2.5 provided the foundation and praising the integration as a win for the open‑model ecosystem.
The dispute resolved amicably within 24 hours, illustrating both the blurred boundaries of base‑model provenance and the rapid, collaborative dynamics of the open AI model community.
AI Engineering
Covering cutting‑edge products, technology news, and hands‑on experience across the AI field: large models, MLOps/LLMOps, AI application development, and AI infrastructure.
