AI coding startup Cursor has acknowledged that its new Composer 2 model was built on top of Moonshot AI’s Kimi K2.5, a Chinese open model, after online sleuths linked the tool’s behavior and internal identifiers back to Kimi. The admission has ignited a wider debate over disclosure, licensing, and the geopolitics of U.S. companies quietly relying on Chinese AI foundations in commercial products.
How Cursor’s Composer 2 came under scrutiny
Earlier this month, Cursor unveiled Composer 2, pitching it to developers as a major step up in “frontier-level coding intelligence” for building and maintaining software. The launch materials highlighted improvements on long-horizon coding tasks and advertised competitive pricing, but made no mention that the core of the system rested on Kimi K2.5, an open model from Beijing-based Moonshot AI.
That silence did not last long. An independent analyst posting under the name Fynn examined how Composer 2 behaved and claimed it was “just Kimi 2.5” with additional reinforcement learning layered on top. As supposed proof, Fynn and other observers pointed to internal model identifiers surfaced in logs and traffic, such as “kimi-k2p5-rl” style labels, and argued that the tokenizer and response patterns closely matched those of Kimi K2.5.
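Fingerprinting of this kind often starts with something simple: feed identical probe strings through each candidate tokenizer and compare the resulting token-ID sequences. The sketch below illustrates the idea only; the two tokenizers are hypothetical stand-ins, not the actual Kimi or Composer tokenizers.

```python
# Illustrative sketch of tokenizer fingerprinting: if two models split
# identical probe strings into identical token-ID sequences, that is
# (weak) evidence they share a tokenizer. The stub tokenizers here are
# hypothetical stand-ins, not the real Kimi or Composer tokenizers.

def tokenize_a(text: str) -> list[int]:
    # Stand-in tokenizer: whitespace split, tokens hashed to IDs.
    return [hash(tok) % 50000 for tok in text.split()]

def tokenize_b(text: str) -> list[int]:
    # Same scheme, mimicking a model that inherited the tokenizer.
    return [hash(tok) % 50000 for tok in text.split()]

def match_rate(probes: list[str]) -> float:
    # Fraction of probes whose token-ID sequences agree exactly.
    hits = sum(tokenize_a(p) == tokenize_b(p) for p in probes)
    return hits / len(probes)

probes = ["def fib(n):", "reinforcement learning", "kimi-k2p5-rl"]
print(match_rate(probes))  # 1.0 when the tokenizers coincide
```

In practice, analysts combine such tokenizer comparisons with behavioral probes and leaked identifiers, since no single signal is conclusive on its own.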
On social media, one critic scoffed that Cursor should “at least rename the model ID” if it wanted to present Composer 2 as a fully in‑house frontier system. The claims landed with extra force because Cursor is not a small experimental project but a heavily funded U.S. startup that has raised billions in capital and is reportedly generating over $2 billion in annualized revenue from its AI coding tools.
Cursor confirms Kimi base, defends its training
As speculation mounted, Cursor leaders stepped forward to explain how Composer 2 was actually built. Co‑founder Aman Sanger confirmed that the team had chosen Kimi K2.5 as the base model for Composer 2 after benchmarking it against multiple alternatives.
“We’ve evaluated a lot of base models on perplexity-based evals, and Kimi K2.5 proved to be the strongest,” Sanger said, adding that the company then ran its own rounds of continued pretraining and reinforcement learning to optimize the model for coding workloads. Internal figures shared by a Cursor employee suggest that roughly a quarter of the pretraining in Composer 2 comes from the original Kimi K2.5 base, with the remainder attributed to Cursor’s own training pipeline.
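Perplexity-based evaluation of the kind Sanger describes scores how well a model predicts held-out text: lower perplexity means the model assigns higher probability to the reference tokens. A minimal sketch of the metric, assuming per-token log-probabilities are already available from the model under test:

```python
import math

def perplexity(token_logprobs: list[float]) -> float:
    # Perplexity = exp(average negative log-likelihood per token).
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Lower is better: a model assigning higher probability to held-out
# code tokens yields lower perplexity on that evaluation set.
print(perplexity([-0.5, -1.2, -0.3]))  # ≈ 1.95
```

Comparing base models this way on a fixed corpus of code gives a quick, training-free signal of which candidate is the strongest starting point for further pretraining and RL.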
Because of this extra work, Cursor argues, Composer 2 now behaves differently from Kimi on coding benchmarks and long, multi-step tasks, and should be seen as a distinct model rather than a simple rebrand. At the same time, Sanger acknowledged that the company’s launch communication fell short, saying Cursor had been “unclear” about the Kimi connection in its initial messaging and promising “corrections in future updates.”
Cursor also stresses that its use of Kimi is not a case of scraping or unauthorized copying. The company accesses Kimi K2.5 through Fireworks AI, an inference and RL hosting platform, under what has been described as an authorized commercial partnership. That setup is central to Cursor’s argument that it remains compliant with Kimi’s license while layering its own proprietary training and product experience on top.
Moonshot AI calls it a win for open models
Moonshot AI, the Chinese startup behind Kimi, has taken a notably positive public stance on the episode. In a statement shared via its Kimi account, the company congratulated Cursor on Composer 2 and explicitly framed Kimi K2.5 as the foundation of the new coding tool.
“Congrats to the @cursor_ai team on the launch of Composer 2! We are proud to see Kimi‑k2.5 provide the foundation,” the Kimi team wrote. “Seeing our model integrated effectively through Cursor’s continued pretraining & high-compute RL training is the open model ecosystem we love to support.”
Moonshot also clarified how the two companies are working together in practice. “Cursor accesses Kimi‑k2.5 via @FireworksAI_HQ hosted RL and inference platform as part of an authorized commercial partnership,” the Kimi account said, positioning the arrangement as a model case of open technology being industrialized by third-party developers.
The response reflects Moonshot’s broader strategy with Kimi K2.x, which it has presented as a high-performance open model family meant to rival Western systems in coding, math, and agentic tasks. The company has previously claimed that Kimi models can outperform or match many mainstream open models in coding benchmarks while remaining significantly cheaper to run.
Licensing, disclosure and a question of trust
For critics, the biggest problem is not the technical choice to build on Kimi but the way Cursor communicated, or failed to communicate, that choice. Kimi K2.5 is distributed under an open license that, according to multiple reports, expects clear attribution for large commercial deployments above defined revenue and user thresholds.
Given Cursor’s reported scale, detractors argue that the company should have prominently credited Kimi from the outset rather than allowing the impression that Composer 2 was solely an in‑house “frontier” model. For enterprise developers deciding whether to trust Composer 2 with proprietary code, the underlying question is about transparency: knowing what model is running, who trained it, and under what legal and security regime.
Some customers and commentators have also raised concerns about the optics and risk of a U.S. coding platform relying on Chinese AI foundations amid growing geopolitical tensions and regulatory scrutiny over cross‑border tech dependencies. Others counter that what ultimately matters is the concrete data path — where inference runs, how logs are handled, and whether there are contractual limits on training with customer code — rather than the nationality of the team that created the base model.
Cursor has previously emphasized that it does not train on private customer repositories without explicit consent and that it offers enterprise-grade guarantees around data handling, though these assurances have not yet been re‑stated in detail in the specific context of Composer 2’s Kimi foundations. Industry watchers note that the company may now face pressure to spell out those safeguards more clearly, especially for risk‑sensitive sectors.
A defining test for the open model ecosystem
Beyond this single product launch, the Cursor–Kimi episode is quickly becoming a reference point for how the open model ecosystem will operate as serious money flows into derivative systems. On one side, Kimi’s enthusiastic public endorsement of its model being used as a base highlights the upside: open models can serve as building blocks for sophisticated, revenue-generating products without every startup having to train from scratch.
On the other, the backlash directed at Cursor signals that open licensing does not erase expectations around attribution and clarity, especially when a company is raising capital and marketing itself on the strength of its “own” models. As more tools like Composer 2 stack proprietary RL and fine-tuning on top of open foundations, the legal and reputational lines between “derived from” and “independent” will likely be tested again and again.
For now, Cursor says it has learned a hard lesson about disclosure and has committed to being more explicit about base models in future releases. Moonshot AI, meanwhile, is using the moment to spotlight Kimi as proof that Chinese open models can power some of the most advanced coding assistants on the market, even if their role is only fully acknowledged after the fact.