Meituan LongCat-2.0-Preview quietly launched: no announcement, no open source

AIMPACT news, April 28 (UTC+8): further details have emerged about the model. Its total parameter count exceeds 1 trillion, it adopts a MoE architecture, and it supports a 1M-token context window. The parameter count is roughly the same as that of DeepSeek V4, released the same day. Insiders say the entire training and inference pipeline for LongCat-2.0-Preview ran on a domestic computing cluster of 50,000 to 60,000 domestically produced accelerator cards, making it the largest-scale training task completed on domestic computing power to date. During the testing period, a free quota of 10 million tokens is provided daily. (Source: BlockBeats)
