Meituan Quietly Releases New Large Model: No Announcement, No Open Source

According to monitoring by Dongcha Beating, Meituan has launched a new model, LongCat-2.0-Preview, on the LongCat API platform, with the update log dated April 20. To date, however, Meituan has published no official announcement or technical report. Previously, every model in the LongCat series (Flash-Chat, Flash-Thinking, Flash-Lite, Flash-Omni, Next) was accompanied by an official blog post and a technical report, and was simultaneously open-sourced on Hugging Face and GitHub. The update log for 2.0-Preview contains no open-source links; the model is available only through the API.

The update log lists three capabilities:

- built for agent development, with native support for tool invocation, multi-step reasoning, and long-context tasks;
- proficiency in code generation, automated workflows, and complex instruction execution;
- deep integration with Claude Code, OpenClaw, OpenCode, and Kilo Code.

On April 24, several media outlets cited informed sources with further details: the model uses a MoE architecture with more than one trillion total parameters and supports a 1M context window, a parameter count roughly on par with DeepSeek V4, released the same day. The sources said LongCat-2.0-Preview's training and inference were completed entirely on domestic computing clusters, using 50,000 to 60,000 domestic accelerator cards, making it the largest-scale training task completed with domestic computing power to date. During the testing period, a daily free quota of 10 million tokens is provided.
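Since the model is served only through the LongCat API and the update log highlights native tool invocation, a request for such a call might be assembled as below. This is a minimal sketch under an unconfirmed assumption that the API follows the common OpenAI-style chat-completions format; the endpoint URL, model identifier, and tool schema are illustrative placeholders, not documented values.

```python
import json

# Placeholder endpoint: the real LongCat API URL is not given in the article.
API_URL = "https://api.longcat.example/v1/chat/completions"

def build_request(prompt: str) -> dict:
    """Assemble a chat request that declares one callable tool (hypothetical schema)."""
    return {
        "model": "LongCat-2.0-Preview",  # model name as listed in the update log
        "messages": [{"role": "user", "content": prompt}],
        "tools": [{
            # Hypothetical tool definition in the OpenAI-compatible style;
            # the actual LongCat tool-calling schema is not documented yet.
            "type": "function",
            "function": {
                "name": "run_workflow",
                "description": "Trigger one step of an automated workflow.",
                "parameters": {
                    "type": "object",
                    "properties": {"step": {"type": "string"}},
                    "required": ["step"],
                },
            },
        }],
    }

payload = build_request("Summarize the failing build logs and rerun the failed steps.")
print(json.dumps(payload, indent=2))
```

The request body itself is plain JSON, so the sketch only builds and prints the payload; actually sending it would require an API key and the real endpoint.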
