Cursor's Aman Sanger Addresses Kimi Model Use in Composer 2

Synopsis

Users had speculated that Composer 2, a new model designed to improve efficiency in software development workflows, was built on an external base model that was not disclosed at launch. In a post on X, cofounder Aman Sanger acknowledged that the company had omitted any mention of the Kimi base model in its launch blog and said it would correct this in future releases.
Aman Sanger, cofounder, Cursor (Image: ETtech)
Artificial intelligence (AI) coding startup Cursor is facing scrutiny over its newly launched Composer 2 model after users speculated that the system is built on an external base model that was not disclosed at launch.

Cursor unveiled Composer 2 on March 19 as a new model designed to improve efficiency in software development workflows. Speculation about its origins intensified after Chinese AI startup Moonshot AI publicly endorsed the newly launched Composer 2 on Saturday.


In a post on X, Cursor cofounder Aman Sanger confirmed that the company selected Kimi K2.5 after evaluating multiple base models.

Moonshot AI develops and owns the Kimi family of models, including Kimi K2.5.

“We’ve evaluated a lot of base models on perplexity-based evals, and Kimi K2.5 proved to be the strongest,” Sanger said.

He added that Composer 2 is built on top of the base model with further training, fine-tuning using reinforcement learning, and supporting systems that help it run efficiently.

Sanger acknowledged that Cursor did not initially disclose its use of the Kimi base model in its launch blog.

“It was a miss to not mention the Kimi base in our blog from the start,” he said, adding that the company plans to correct this in future releases.


Cursor operates in a competitive landscape alongside established players such as OpenAI and Anthropic, as well as a growing number of specialised startups building coding-focussed AI tools.

Sanger, who also leads the company’s research efforts, had said earlier that Composer 2 is trained specifically on coding-related data. The approach focusses on building a smaller, more specialised model optimised for software engineering tasks.

Composer 2 features and pricing

Composer 2 is priced at $0.50 per million input tokens and $2.50 per million output tokens, positioning it competitively among coding-focussed AI models.
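At those rates, per-request cost is simple arithmetic. A minimal sketch using the published rates; the token counts in the example are purely illustrative, not figures from Cursor:

```python
# Published Composer 2 rates (USD per million tokens)
INPUT_RATE = 0.50
OUTPUT_RATE = 2.50


def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of a single request in USD."""
    return (input_tokens / 1_000_000) * INPUT_RATE \
        + (output_tokens / 1_000_000) * OUTPUT_RATE


# Illustrative example: 200k input tokens (a large codebase context)
# and 10k output tokens (a generated patch).
cost = request_cost(200_000, 10_000)
print(f"${cost:.4f}")  # $0.1250
```

Under this example, input tokens dominate the bill ($0.10 of the $0.125) even though output tokens cost five times as much per token, which is typical of coding workloads that feed the model large amounts of context.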

The model is designed for long-horizon coding tasks, enabling it to handle multi-step software problems such as debugging, testing and implementation across larger codebases, the company said in a blog post.

Cursor said Composer 2 shows measurable gains over earlier versions, reporting 61.3 on CursorBench, 61.7 on Terminal Bench 2.0, and 73.7 on SWE-bench Multilingual.

These benchmarks evaluate performance across areas such as coding accuracy, instruction following, and the capability of the AI model to perform real-world software engineering tasks.

How it stacks up against OpenAI, Anthropic

On Terminal Bench 2.0, Composer 2's score of 61.7 placed it ahead of Anthropic's Opus 4.6, which Cursor said recorded 58.0, though behind OpenAI's GPT model at 75.1.

However, comparisons across models may vary depending on evaluation setup, datasets, and tokenisation methods.

Cursor also noted that tokens used by Anthropic’s models are approximately 15% smaller than those used by Composer and GPT models, which can affect cost and performance comparisons.
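The token-size difference matters because a per-million-token price is only comparable if the tokens encode the same amount of text: if a model's tokens are about 15% smaller, the same text consumes correspondingly more of them. A minimal sketch of the adjustment, assuming a hypothetical nominal rate for the comparison model (the $3.00 figure is illustrative, not a quoted price):

```python
# Cursor noted Anthropic tokens are roughly 15% smaller than those
# used by Composer and GPT models, i.e. the same text needs more of them.
TOKEN_SIZE_RATIO = 0.85  # Anthropic token size relative to Composer/GPT


def effective_rate(nominal_rate_per_m: float, token_size_ratio: float) -> float:
    """Convert a per-million-token rate into a rate per million
    Composer/GPT-equivalent tokens, for a like-for-like comparison."""
    return nominal_rate_per_m / token_size_ratio


# Illustrative nominal rate of $3.00 per million tokens (not a real quote):
print(round(effective_rate(3.00, TOKEN_SIZE_RATIO), 2))  # 3.53
```

In this sketch, a nominal $3.00-per-million rate behaves like roughly $3.53 per million Composer-equivalent tokens, which is the kind of effect Cursor says can skew headline price comparisons.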

Composer 2 is built as a mixture-of-experts model trained with reinforcement learning in real development environments. This approach differs from many competing systems, which are typically trained on broader datasets and later adapted for coding tasks.